
Commit

Clean code and organize files for v1.0. Eliminate all GUI references, and remove some old code and unused imports
jmoldon committed May 2, 2019
1 parent cb29e34 commit 610fb59
Showing 10 changed files with 115 additions and 541 deletions.
2 changes: 1 addition & 1 deletion default_params.json
@@ -240,7 +240,7 @@
"p_scan_minsnr" : 2,
"ap_scan_tablename" : "phscal_ap_scan.G3",
"ap_scan_prev_cal" : ["bpcal.BP2", "allcal_d.K1", "allcal_p.G3"],
"ap_scan_calibrator" : "default",
"ap_scan_calibrator" : "phscals",
"ap_scan_solint" : "inf",
"ap_scan_spw" : ["*", "innerchan"],
"ap_scan_combine" : "",
21 changes: 10 additions & 11 deletions documentation/docs.md
@@ -1,10 +1,5 @@
<!---
I use grip to convert to html using: grip docs.md --export --title "e-MERLIN CASA pipeline"
It can also be converted to pdf using: http://www.markdowntopdf.com/
-->

# e-MERLIN CASA pipeline
### Documentation for v1.0.0
### Documentation for v1.0

---
# Table of contents
@@ -52,17 +47,16 @@ You are ready to execute the pipeline. If `casa` points to casa 5.4 version:
- Create the directory where you will work, and copy `inputs.txt` to it.
- Edit the `inputs.txt` file: fill in the `fits_path` (where the FITS files are located) and the `inbase` (any name you want to give to your project).
- In the `inputs.txt` file, fill in the names of your fields: `targets`, `phscals`, `fluxcal`, `bpcal`, `ptcal`.
- Leave all other parameters as default.
- A MS `<inbase>.ms` will be produced
- Data will be converted to MS and prepared. Open in a web browser the file `./weblog/index.html`.
- The weblog and the plots will be updated every time a step is finished.
- Leave all other parameters as default so all steps will be executed.
- Data will be converted to MS and prepared. Once the data is loaded, you can open the file `./weblog/index.html` in a web browser.
- The weblog and the plots will be continuously updated every time a step is finished.
- If you have a list of manual flags to apply, write them in `inputfg.flags` to flag data before it is averaged in the next step.
- If `average = 1` a new MS will be produced with name `<inbase>_avg.ms`.

### Data calibration
- You may want to include additional manual flags in `./inputfg_avg.flags`
- Run all the calibration steps by setting them to 1. You may prefer to run each of them one by one and check the output plots.
- It is a good practice to redo the calibration once you are happy with your flags and you are sure of all the steps.
- It is good practice to redo the calibration once you are happy with your flags and parameters.
- The whole calibration process can be repeated from scratch by setting all the steps in the Calibration section to 1, including `restore_flags`, which will restore the flag status to the state the data had when it was averaged.
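Under the hood, restoring a saved flag state inside a CASA session is typically done with the `flagmanager` task; a minimal sketch (the MS name and the flag version name here are assumptions for illustration, not the pipeline's actual values):

```python
# Sketch only: flagmanager is a CASA task available inside a CASA session.
flagmanager(vis='myproject_avg.ms', mode='list')        # list the saved flag versions
flagmanager(vis='myproject_avg.ms', mode='restore',
            versionname='initial_flags')                # restore an assumed saved version
```

The pipeline performs this restore for you when `restore_flags = 1`; the snippet only shows what that step amounts to.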


@@ -128,6 +122,11 @@ These inputs just select which of the processing steps will be executed:

Additionally, steps that produce calibration tables can be set to also apply the calibration up to that point, which also modifies the corrected column. A value of 2 means run the step and apply the calibration up to and including that step. The standard procedure is to apply everything at the end of the calibration with the step `applycal_all = 1`, so setting any previous step to 2 is useful only to check the calibration up to that step.
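The 0/1/2 step logic described above can be sketched in plain Python (the function name is hypothetical, not part of the pipeline):

```python
def interpret_step(value):
    """Map a step value from inputs.txt to actions:
    0 -> skip the step,
    1 -> run the step,
    2 -> run the step and apply the calibration up to that point."""
    run_step = value in (1, 2)
    apply_calibration = value == 2
    return run_step, apply_calibration

# For example, setting a calibration step to 2 in inputs.txt would run it
# and also fill the corrected column up to that point.
```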

### Default parameters

All the parameters used by the pipeline are controlled by the file `default_params.json`, which is loaded as a Python dictionary. If a copy exists in your working directory it will be used; otherwise, the pipeline falls back to the one in the `eMERLIN_CASA_pipeline` directory.

The parameters are divided into sections according to which pipeline steps they control. Some parameters control or select what needs to be run, but in most cases a parameter translates directly to the CASA parameter that will be used. For example, the parameter `defaults['gaincal_final']['ap_solint']` will set the solution interval for the `ap` calibration in the step `gaincal_final`.
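The lookup order described above can be sketched as follows (`load_defaults` is a hypothetical helper written for illustration, not a pipeline function):

```python
import json
import os

def load_defaults(pipeline_path, working_dir='.'):
    """Load default_params.json from the working directory if present,
    otherwise fall back to the copy shipped with the pipeline."""
    local = os.path.join(working_dir, 'default_params.json')
    fallback = os.path.join(pipeline_path, 'default_params.json')
    path = local if os.path.exists(local) else fallback
    with open(path) as f:
        return json.load(f)

# A parameter then maps directly onto the corresponding CASA argument, e.g.
# gaincal(..., solint=defaults['gaincal_final']['ap_solint'])
```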

---

74 changes: 51 additions & 23 deletions eMERLIN_CASA_pipeline.py
@@ -2,7 +2,6 @@
import os,sys,math
import numpy as np
import pickle
from Tkinter import *
import getopt
import logging
import collections
@@ -14,7 +13,7 @@
import casadef


current_version = 'v0.10.24'
current_version = 'v0.10.25'

# Find path of pipeline to find external files (like aoflagger strategies or emerlin-2.gif)
try:
@@ -27,10 +26,9 @@
pipeline_path = pipeline_path + '/'
sys.path.append(pipeline_path)

import functions.eMERLIN_CASA_functions as em
import functions.weblog as emwlog
import functions.eMERLIN_CASA_plots as emplt
#from default_params import defaults
import functions.eMCP_functions as em
import functions.eMCP_weblog as emwlog
import functions.eMCP_plots as emplt

casalog.setlogfile('casa_eMCP.log')

@@ -65,7 +63,7 @@ def get_pipeline_version(pipeline_path):
return branch, short_commit


def run_pipeline(inputs=None, inputs_path=''):
def create_dir_structure(pipeline_path):
# Paths to use
weblog_dir = './weblog/'
info_dir = './weblog/info/'
@@ -82,37 +80,67 @@ def run_pipeline(inputs=None, inputs_path=''):
em.makedir(images_dir)
em.makedir(logs_dir)
em.makedir(plots_dir+'caltables')
os.system('cp -p {0}/emerlin-2.gif {1}'.format(pipeline_path, weblog_dir))
os.system('cp -p {0}/eMCP.css {1}'.format(pipeline_path, weblog_dir))
pipeline_version = current_version
os.system('cp -p {0}/utils/emerlin-2.gif {1}'.format(pipeline_path, weblog_dir))
os.system('cp -p {0}/utils/eMCP.css {1}'.format(pipeline_path, weblog_dir))
return calib_dir, info_dir

def start_eMCP_dict(info_dir):
    # Continue with previous pipeline configuration if possible:
    try:
        eMCP = load_obj(info_dir + 'eMCP_info.pkl')
    except:
        eMCP = collections.OrderedDict()
        eMCP['steps'] = em.eMCP_info_start_steps()
        eMCP['img_stats'] = collections.OrderedDict()
    return eMCP
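`load_obj` is a pipeline helper defined elsewhere in the repository; a minimal sketch of the pickle round-trip it presumably performs (both function bodies here are assumptions for illustration):

```python
import pickle

def load_obj(filename):
    # Assumed helper: load a previously pickled Python object,
    # e.g. eMCP = load_obj(info_dir + 'eMCP_info.pkl')
    with open(filename, 'rb') as f:
        return pickle.load(f)

def save_obj(obj, filename):
    # Assumed counterpart: pickle an object to disk
    with open(filename, 'wb') as f:
        pickle.dump(obj, f)
```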
def get_logger(
        LOG_FORMAT     = '%(asctime)s | %(levelname)s | %(message)s',
        DATE_FORMAT    = '%Y-%m-%d %H:%M:%S',
        LOG_NAME       = 'logger',
        LOG_FILE_INFO  = 'eMCP.log',
        LOG_FILE_ERROR = 'eMCP_errors.log'):

    log = logging.getLogger(LOG_NAME)
    log_formatter = logging.Formatter(fmt=LOG_FORMAT, datefmt=DATE_FORMAT)
    logging.Formatter.converter = time.gmtime

    # Comment this out to suppress console output
    stream_handler = logging.StreamHandler()
    stream_handler.setFormatter(log_formatter)
    log.addHandler(stream_handler)

    # Normal eMCP log with all information
    file_handler_info = logging.FileHandler(LOG_FILE_INFO, mode='a')
    file_handler_info.setFormatter(log_formatter)
    file_handler_info.setLevel(logging.INFO)
    log.addHandler(file_handler_info)

    # eMCP_errors log containing only warnings or worse
    file_handler_error = logging.FileHandler(LOG_FILE_ERROR, mode='a')
    file_handler_error.setFormatter(log_formatter)
    file_handler_error.setLevel(logging.WARNING)
    log.addHandler(file_handler_error)

    log.setLevel(logging.INFO)
    return log


def run_pipeline(inputs=None, inputs_path=''):
    # Create directory structure
    calib_dir, info_dir = create_dir_structure(pipeline_path)

    # Setup logger
    logger = get_logger()

    # Initialize eMCP dictionary, or continue with previous pipeline configuration if possible:
    eMCP = start_eMCP_dict(info_dir)
    eMCP['inputs'] = inputs

    try:
        branch, short_commit = get_pipeline_version(pipeline_path)
    except:
        branch, short_commit = 'unknown', 'unknown'
    pipeline_version = current_version
    logger.info('Starting pipeline')
    logger.info('Running pipeline from:')
    logger.info('{}'.format(pipeline_path))
149 changes: 0 additions & 149 deletions functions/dflux.py

This file was deleted.

