HIFI Data Reduction telecon - Monday 25 July - 5 pm CEST
Present: MaxA, SandrineB, JorgeP, RussS, MihkelK, ElenaP, Velu, DavidT
2. Follow up of Sandrine's presentation at HIPE forum:

We walked through the presentation given by Sandrine at the HIPE forum and commented on the issues she raised back then. The presentation can be found at:

Data access:

Suggestion to turn getObservation into a task, whereby the content of given pools could be browsed and directly read. This was supposed to have been followed up at the Obs. Mngt follow-up splinter at the HIPE forum, but it does not appear in the minutes.
Jorge: prefers to work from the command line to read data (i.e. no GUI), but notes:
- a GUI would be helpful
- there is a general problem with the versions in a given pool. The version numbering is not intuitive (it was once explained to him by Paul Balm but is not straightforward). The consequence is that getObservation does not always pick up the latest version. The GUI approach would be one way to visualise and select versions in a more controlled way
- the product browser shows versions, but their numbering is "random" and there is no time-tag to filter for the latest one


- uses one pool per version to circumvent the above
- prefers to work with the command line too
- uses labels for versioning
We discussed whether getObservation could then maybe have a selection keyword based on labels (some recall this was touched upon at the HIPE forum, but there is no clear memory of what was said).

Russ suggests that Bruno set up a dedicated telecon between the developer (Marco Soldati) and the users.
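The label-based selection discussed above could behave like the following sketch, where pool entries are modeled as plain dictionaries. Both the function name and the keywords are assumptions (getObservation offers no such keyword today); the point is that with "random" version numbers, the time-tag is the only reliable way to find the latest entry:

```python
# Sketch of label/time-tag based version selection - illustrative only,
# not the HIPE getObservation API.

def select_version(entries, label=None):
    """entries: list of {'version':..., 'label':..., 'timestamp':...}.
    With a label, return the matching entry; otherwise return the most
    recently written one (by time-tag, since version numbers here are
    not monotonic)."""
    if label is not None:
        matches = [e for e in entries if e["label"] == label]
        if not matches:
            raise ValueError("no entry with label %r" % label)
        return matches[-1]
    return max(entries, key=lambda e: e["timestamp"])
```

Note that in this sketch an entry with version number 0 can still be the "latest" one if its time-tag is newest, which is exactly the confusion reported with the product browser.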
CalTree version and repipelining:

Sandrine reported inaccurate/missing information about the caltree versions on the HIFI webpages at HSC. This has now been improved with more information about the version number and the content of the updates. Users were surprised not to see a cal version associated with each HIPE release. Russ recalls that they are not necessarily updated together. Sometimes a new HIFI caltree becomes applicable from a given OD onwards while no HIPE update takes place; there can also be HIPE updates with no new caltree (this is why e.g. HIPE 7 used the caltree released with HIPE 6.1).
It is also recalled that the HIFI caltree always contains all the previous versions, so that having the latest version implies that one has all versions and can therefore reprocess with an older caltree.
Future work will include:
- a clear version number for the caltree in the meta-data, also applicable when reprocessing is done with a different cal version
- more detailed release notes for each caltree
- an automatic updater (not as fancy as what PACS uses, which is not compatible with the caltree structure of HIFI). There is a proposal to have a sort of icon in HIPE informing the user whether the version in use is out-dated, and allowing easy update on-the-fly
- it is agreed that a new keyword of the HifiPipeline task could be proposed to choose the calVersion to use, specifying it with the unique identifier number
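Since the latest caltree contains all previous versions, the proposed calVersion keyword could resolve versions along the following lines. The tree structure and key names below are illustrative assumptions, not the actual HIFI caltree layout:

```python
# Sketch of version resolution in a caltree that carries all previous
# versions - only a conceptual illustration of the proposed HifiPipeline
# calVersion keyword, not the real caltree structure.

caltree = {
    "versions": {
        1: {"released_with": "HIPE 6.1"},
        2: {"released_with": "HIPE 8"},
    }
}

def get_cal(tree, cal_version=None):
    """Return (version, calibration) for the requested unique identifier,
    defaulting to the latest one when no version is requested."""
    versions = tree["versions"]
    if cal_version is None:
        cal_version = max(versions)  # latest unique identifier
    if cal_version not in versions:
        raise ValueError("unknown calVersion: %r" % cal_version)
    return cal_version, versions[cal_version]
```

With such a scheme, reprocessing with an older caltree is just a matter of passing an explicit calVersion, as discussed above.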

Flags:

It is agreed by the users that there are more flags than they really need in practice. Two main types of flags are needed by non-expert users:
- a flag to simply kill channels from the products
- a flag to mask certain channel ranges so that they are ignored by certain tasks (e.g. strong lines in FitBaseline, etc.)
It is proposed to add a new keyword to the flagging task, to be able to select either one of the above. It is TBD what numbers those flags should have, in particular if one wants to ensure that they will be honoured by the different tasks further down the chain.
Flags associated with the data-frames are hard to interpret because they contain several flags at the same time. A new functionality will be available in HIPE 8, where the bit map of the flags (following the table in http://www.sron.rug.nl/docserver/wiki/doku.php?id=docbook:hifi-um:crowflags) will be provided.
It is considered a "bug" to be able to apply PACS flags to HIFI data. An SPR will be raised.
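The HIPE 8 bit-map functionality could decode a combined flag value along these lines. The flag names and bit positions below are made-up placeholders; the real assignments are in the table linked above:

```python
# Sketch of decoding a combined data-frame flag into individual flags.
# Names and bit positions are placeholders, not the real HIFI table.

FLAG_BITS = {
    0: "SPUR_CANDIDATE",
    1: "BAD_PIXEL",
    2: "SATURATED",
}

def decode_flags(value):
    """Return the names of all flags whose bits are set in 'value'.
    A combined value is simply the bitwise OR of the individual flags."""
    return [name for bit, name in sorted(FLAG_BITS.items())
            if value & (1 << bit)]
```

For example, a channel carrying both the bit-0 and bit-2 flags has the combined value 0b101, and decodes back into the two individual flags.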

Averaging data can be done in several ways:
1. for H+V, use polarPair + avg -> only for a pair of spectra; polarPair does the resampling
2. for any pair of spectra, Resample + PairAvg -> only for a pair of spectra; resampling is the responsibility of the user
3. for an array of spectra, Resample + Avg. Spectra need not overlap, but the target grid is to be made up by the user
Steps 1 and 2 seem to do the same thing, but it is unclear from the documentation whether one is preferred or what the differences are.
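The Resample + Avg pattern of options 2 and 3 can be sketched outside HIPE with numpy. This is only a conceptual illustration (the HIPE tasks have their own signatures); it shows the two steps the user is responsible for: interpolating every spectrum onto a common target grid, then averaging sample by sample:

```python
# Conceptual sketch of "Resample + Avg" for an array of spectra,
# using numpy instead of the HIPE tasks.

import numpy as np

def resample_and_average(spectra, target_grid):
    """spectra: list of (frequency, flux) 1-D array pairs.
    target_grid: 1-D frequency array supplied by the user.
    Returns the per-channel mean of all spectra on target_grid."""
    resampled = [np.interp(target_grid, freq, flux)
                 for freq, flux in spectra]
    return np.mean(resampled, axis=0)
```

Note that np.interp extrapolates by holding edge values, so a sensible target grid should stay within the frequency coverage of the input spectra.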
Mihkel: wrote personal scripts to be able to average data over large frequency ranges
Jorge: wrote personal scripts to deal with spectra at different LO settings
Overall it seems clear that we lack a generic functionality to average spectra over overlapping - or not - frequency areas, from the same obsid or not necessarily (e.g. spectral scans). This is particularly important e.g. for KPs working with mini-SScans (as a complement to the deconvolution).
We proposed the following:
- make PairAvg offer a resampling option (i.e. porting some of the Resample task)
- have a generic task working on more arrays and allowing a dynamical resampling (i.e. automatically building the target common grid following a handful of options, e.g. "truncate" (only the overlapping range of all spectra), "within a given freq range", etc.)
-> there are already two prototypes (that we know of) in the CHESS program, written by Damien Rabois and Mihkel respectively. Sandrine still has to test them. These could be used as the basis for the proposed new functionality. An SCR will be raised to collect specifications.
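The proposed dynamical construction of the target grid could look like the following sketch. The option names ("truncate", an explicit range) follow the discussion above; the function name, parameters, and grid step handling are assumptions:

```python
# Sketch of dynamical target-grid construction for a generic averaging
# task - option names follow the telecon discussion, the rest is assumed.

import numpy as np

def build_target_grid(spectra_freqs, step, mode="truncate", freq_range=None):
    """spectra_freqs: list of 1-D frequency arrays, one per spectrum.
    mode="truncate": keep only the range covered by ALL spectra.
    mode="range":    use a user-supplied (fmin, fmax) freq_range."""
    if mode == "truncate":
        fmin = max(f.min() for f in spectra_freqs)
        fmax = min(f.max() for f in spectra_freqs)
    elif mode == "range":
        fmin, fmax = freq_range
    else:
        raise ValueError("unknown mode: %r" % mode)
    if fmax <= fmin:
        raise ValueError("spectra do not overlap in frequency")
    # half-step margin so the end point is included despite float rounding
    return np.arange(fmin, fmax + 0.5 * step, step)
```

Combined with a resampler, this would give the "average anything onto an automatically built grid" behaviour that the mini-SScan use case needs.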
Fitter GUI:

Despite the documentation, Sandrine reports problems getting to a line fit:
- masking by manual entry does not work for her (Max did it on-the-fly during the telecon but uses HIPE 8, while Sandrine used HIPE 7 - any link?)
- the GUI still contains some confusing items:
  • one button per polynomial order is superfluous
  • two accept buttons (apparently talking to different layers of the task)
  • Reset/New Instance does not work to get back the original data
Max and Russ report that they rather use the command-line approach for fitting. Would it be more efficient in the short term to have a template code (there may be one in the various workshop packages)?
Russ will contact Rob and Carolyn about the points raised by Sandrine.
Sandrine prepared a tutorial on how to use PACS/SPIRE data in CASSIS (see cassis.cesr.fr). She is working on a similar thing for HIFI data.
Channel weighting in deconv:

The documentation is apparently not clear about how weights are used and what the various types of weights mean. David will contact Carolyn.
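Since the meaning of the deconvolution weights is exactly what is unclear, no HIFI-specific example can be given here. Purely as a generic illustration of how a weight column can enter an average, this sketch applies the common inverse-variance convention (an assumption, not the documented HIFI scheme):

```python
# Generic inverse-variance weighted average of spectra, channel by
# channel - an illustration of weighting conventions, NOT the HIFI
# deconvolution scheme, which remains to be clarified in the docs.

import numpy as np

def weighted_average(fluxes, variances):
    """fluxes, variances: 2-D arrays (n_spectra x n_channels).
    Each channel is averaged with weights w = 1/variance."""
    w = 1.0 / np.asarray(variances, dtype=float)
    f = np.asarray(fluxes, dtype=float)
    return (w * f).sum(axis=0) / w.sum(axis=0)
```

With equal variances this reduces to a plain mean; noisier spectra are down-weighted otherwise.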
User scripts:

Mihkel is still iterating with Colin on an enhanced version of the script allowing one to inspect contamination in the OFFs for DBS. It currently only works for PointDBS. In the meantime there is a script that was distributed in the framework of the DP workshops.
Although it was mentioned at the spectral-mapping telecon, users on the phone were not aware of a similar script to potentially correct for OFF contamination in OTF maps. David will ask.

Ambiguity of information about LO frequency:

- at level 0, both the meta-data and the individual DF have the LO frequency reported as the value that was tuned by the instrument, called "loFrequency" and "LoFrequency" respectively
- at level 1, the individual DF move the tuned LOF into "LoFrequency_measured" and replace the value in "LoFrequency" by the S/C-velocity-corrected frequency. The meta-data "loFrequency" is however kept at what is now given in "LoFrequency_measured".
Ideally, the value stored as meta-data for "loFrequency" should be the one after S/C velocity correction. However, this correction is not strictly the same for all the DF contained in e.g. a long observation. It is TBC whether this variation in time would be at all visible in our LOF over a long obsid, but if confirmed then the proposed alternative is to ensure a fixed meta-data value by assigning the value pre-correction. If so, we propose that this meta-data entry is renamed "loFrequency_measured" once the S/C velocity is corrected.
Max was asked to look into the worst-case LOF variation over a group of data-frames due to the S/C velocity correction.
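That worst-case check can be sketched under the assumption of the non-relativistic Doppler form f_corr = f_tuned * (1 + v/c); the sign convention actually used by the pipeline should be verified against the code:

```python
# Sketch of the worst-case LOF spread over a group of data-frames due to
# the S/C velocity correction. Assumes the non-relativistic Doppler form
# f_corr = f_tuned * (1 + v/c); the HIFI sign convention is TBC.

C_KMS = 299792.458  # speed of light [km/s]

def corrected_lof(f_tuned_ghz, v_kms):
    """S/C-velocity-corrected LO frequency [GHz]."""
    return f_tuned_ghz * (1.0 + v_kms / C_KMS)

def worst_case_variation(f_tuned_ghz, velocities_kms):
    """Spread (max - min) of the corrected LOF over the data-frames,
    returned in MHz."""
    lofs = [corrected_lof(f_tuned_ghz, v) for v in velocities_kms]
    return (max(lofs) - min(lofs)) * 1e3  # GHz -> MHz
```

As an order of magnitude, a 0.1 km/s change in S/C radial velocity at a 1000 GHz LO corresponds to roughly 0.33 MHz.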
Saving data:

There are various options to save the data:
- localStoreWriter
- poolDataWriter
- saveProducts
- saveObservation (counterpart of getObservation - one pool per obsid)
It was found unclear from the documentation what the advantages/disadvantages of the various options are, especially whether some methods are there for historical reasons and should not be the preferred ones.
Also, no tooltips are given on saveProducts. An SxR on the documentation shall be raised.
3. Things that users can do:

Considered covered by the discussion items above.
4. Things that users cannot do:

Partly covered by the discussion items above. Mihkel and Max however have a couple more items that they had trouble with; for some of them, they wrote dedicated scripts.
- Mihkel:
  • tool to store the output of deconv into the obscontext (e.g. multiple SScans)
  • some private scripts for line fitting
- Max:
  • would like getObservation/saveObservation to use labels, as allowed in the ProductStorage().saveAs(obscontext, label) method, so that modified obscontexts could be written back into the same store; hopefully much more diskspace-efficient

  • also, there could be a listObservation(label=..., poolName='...') that would just print the labels, obsids, and modification dates in all or some of your pools
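Max's proposed listObservation could be mocked up as below. The task does not exist, and the pool structure here is a plain-dictionary stand-in, not the HIPE ProductStorage API:

```python
# Mock-up of the proposed listObservation() task - hypothetical; pools
# are modeled as simple dictionaries, not HIPE ProductStorage pools.

def listObservation(pools, label=None, poolName=None):
    """Print and return (pool, label, obsid, modification date) for all
    matching entries, optionally filtered by label and/or pool name.
    pools: {poolName: [{'label':..., 'obsid':..., 'modified':...}, ...]}"""
    rows = []
    for name, entries in sorted(pools.items()):
        if poolName is not None and name != poolName:
            continue
        for e in entries:
            if label is not None and e["label"] != label:
                continue
            rows.append((name, e["label"], e["obsid"], e["modified"]))
    for name, lab, obsid, mod in rows:
        print("%-12s %-10s %10d  %s" % (name, lab, obsid, mod))
    return rows
```

Such a listing would make the label-based save/get workflow discussed above browsable without opening a GUI.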
5. Questions/doubts:

Considered covered by the discussion items above.
6. Next telecon:

A Doodle poll will be sent by Bruno after the summer.
