Project author: attipaci

Project description:
Data reduction and imaging for select astronomical cameras
Language: Java
Repository: git://github.com/attipaci/crush.git
Created: 2017-11-15T02:53:46Z
Project page: https://github.com/attipaci/crush

License: GNU General Public License v3.0


CRUSH

Author: Attila Kovacs
Last updated: 12 July 2019


Table of Contents

  1. Getting Started

    • 1.1. Installation
    • 1.2. (optional) Java Runtime Configuration
    • 1.3. CRUSH Pipeline Configuration
    • 1.4. The Tools of the CRUSH Suite
    • 1.5. Quick Start
    • 1.6. A Brief Description of What CRUSH Does and How…
    • 1.7. Command-line Options and Scripting Support
    • 1.8. Making Sense of the Console Output
    • 1.9. Examples
  2. Advanced Topics

    • 2.1 Pointing Corrections
    • 2.2 Pixelization and Smoothing
    • 2.3 Recovery of Extended Emission
    • 2.4 Filtering corrections, transfer functions, etc.
    • 2.5 Image Processing Post-Reduction
    • 2.6 Reducing Very Large Datasets (to do…)
  3. Advanced Configuration

    • 3.1 Commands
    • 3.2 Pipeline Configuration
    • 3.3 Source (Map) Configuration
    • 3.4 Scan Configuration
    • 3.5 Instrument Configuration
    • 3.6 Advanced startup environment and Java configuration
  4. Correlated Signals

    • 4.1 Modes and Modalities
    • 4.2 Removing Correlated Signals
    • 4.3 User-defined Modalities
  5. Custom Logging Support

    • 5.1 Log Files and Versioning
    • 5.2 Decimal Formatting of Values
    • 5.3 Loggable Quantities
  6. Supported Cameras

  7. Future Developments (A Wish List…)

    • 7.1 Support for Heterodyne Receivers and Interferometry
    • 7.2 Interactive Reductions
    • 7.3 Distributed and GPU Computing
    • 7.4 Graphical User-Interface (GUI)
  8. Further information


1. Getting Started

CRUSH-2 is a reduction and imaging package for various astronomical imaging
arrays. Version 2 provides a unified platform for reducing data from
virtually any fast-sampling scanning camera.

1.1. Installation

Install Java (if not already installed)

Download Java, e.g. from www.java.com. If you have Java already, check that
it is version 1.8.0 (a.k.a. Java 8) or later, by typing:

  > java -version

Note that the GNU Java, a.k.a. gij (the default on some older RedHat and
Fedora systems), is painfully slow and unreliable, and will not always run
CRUSH correctly. If you need Java, you can download the latest JRE from

https://java.com

Download CRUSH

You can get the latest version of CRUSH from:

http://www.sigmyne.com/crush

Linux users can grab one of the distribution packages to install CRUSH via
a package manager. Packages are provided for both RPM-based distributions
(like Fedora, Redhat, CentOS, or SUSE), and Debian-based distributions (e.g.
Ubuntu, Mint). Both will install CRUSH in /usr/share/crush. Note that if
you use one of these packages you will need root privileges (e.g. via sudo)
on the machine. Unprivileged users should install using the tarball.

For all others, CRUSH is distributed as a gzipped tarball (or as a zip
archive). Simply unpack it in the directory where you want CRUSH to
reside.

A. Installation from tarball (POSIX/UNIX, incl. Mac OS X)

Unpack the tarball in the desired location (e.g. under ~/astrotools/):

  > cd ~/astrotools
  > tar xzf crush-2.xx-x.tar.gz

Verify that CRUSH works:

  > cd crush
  > ./crush

You should see a brief information screen on how to use CRUSH.

B. Installation via Linux packages

Use the package manager on your system to install. E.g., on RPM-based
systems (Fedora, RHEL, CentOS, SUSE) you may type:

  > sudo yum localinstall --nogpg crush-<version>-noarch.rpm

On Debian based systems (e.g. Ubuntu, Mint) the same is achieved by:

  > sudo apt-get install crush-<version>.deb

Or, you may simply double click the package icon from a file browser.

C. Windows Installation

Please refer to README.windows, which may be found in the
Install\windows\ subdirectory of the distribution, or online under the
‘Documents’ tab.

1.2. (optional) Java Configuration

CRUSH ships with a default Java configuration. On Windows and the most
common POSIX platforms (Linux, Mac OS X, BSD, and Solaris), it will
automatically attempt to set an optimal configuration. On other platforms,
it comes with fail-safe default values (default java, 32-bit mode and 1GB
of RAM use).

You can override the defaults by placing your settings in arbitrary files
under /etc/crush2/startup or ~/.crush2/startup (for the equivalent
configuration under Windows, please refer to README.windows).

(Any settings in the user’s home under ~/.crush2/startup will override the
system-wide values in /etc/crush2/startup or C:\Program Data\startup.
If multiple config files exist in the same location, they will be parsed
in no particular order.)

E.g., placing the following lines in ~/.crush2/startup/java.conf overrides
all available runtime configuration settings:

  JAVA="/usr/java/latest/bin/java"
  USEMB="4000"
  JVM="-server"
  EXTRAOPTS="-Djava.awt.headless=true"

Upon startup, CRUSH will find and apply these settings, so it will use
/usr/java/latest/bin/java to run CRUSH with 4GB of RAM, using the HotSpot
server VM, and in headless mode (without display, mouse or keyboard).

Below is a guide to the variables that you can override to set your own
Java runtime configuration:

  JAVA       Set to the location of the Java executable you want to use.
             E.g. "java" to use the default Java, or
             `/usr/java/latest/bin/java` to use the latest from Oracle or
             OpenJDK.

  USEMB      Set to the maximum amount of RAM (in MB) available to
             CRUSH. E.g. "4000" for 4GB. On a 32-bit OS, or with a 32-bit
             Java installation, no more than 2GB of RAM may be accessed.
             In practice, the maximum 32-bit OS/Java values range from
             around "1000" (32-bit Windows / Java) to "1900" (32-bit
             Linux / Java).

  JVM        Usually set to `-server` for Oracle or OpenJDK. If using
             IBM's Java, set it to "" (empty string). On ARM platforms,
             you probably get better performance using `-jamvm` or
             `-avian`. To see what VM options are available on your
             platform, run `java -help`. The VM options are listed near
             the top of the resulting help screen.

  EXTRAOPTS  Any other non-standard options you may want to pass to the
             Java VM should go here.

You can also specify environment variables, and add shell scripts (bash),
since these configuration files are in fact sourced as bash scripts before
launching Java / CRUSH. For example you can add:

  CRUSH_NO_UPDATE_CHECK="1"
  CRUSH_NO_VM_CHECK="1"
  echo "Will try to parse my own configuration now... "
  if [ -f ~/mycrushconfig.sh ] ; then
    echo -n "OK"
    source ~/mycrushconfig.sh
  else
    echo -n "Not found"
  fi

The above will disable update checking (not recommended!) and VM checking
(also not recommended!) and will source the contents of
~/mycrushconfig.sh if and when such a file exists.

1.3. CRUSH Pipeline Configuration

The preferred way of creating user-specific configurations for CRUSH is to
place your personalized configuration files inside a .crush2/
configuration directory within your home. This way, your personalized
configurations will survive future updates to CRUSH.

You can create/edit your default configuration by editing default.cfg
(either in the installation folder or in ~/.crush2, or in the appropriate
instrument subdirectory within either of them).
As an example, a user configuration for LABOCA data can be placed into
~/.crush2/laboca/default.cfg, with the content:

  datapath ~/data
  outpath ~/images
  project T-79.F-0002-2006

This tells crush that when reducing LABOCA data it should look for raw data
files in ~/data, write all output to ~/images, and use T-79.F-0002-2006 as
the default project.

The tilde character ~ is used as a shorthand for the home directory
(similarly to UNIX shell syntax). In your configuration you can also refer
to environment variables or other settings (see more about it further below).
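
For instance, to send all output to an images folder under your home,
either of the following lines should work (a minimal sketch; the {$HOME}
form expands the environment variable, as also shown in Sec. 3.2):

  outpath ~/images
  outpath {$HOME}/images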

Of course, you can create any number of configurations, name them as you
like, and place them where you like (practical if you have many data
locations, projects etc.). You can easily invoke configuration files as
needed via

  > crush [...] -config=<path-to-myconfig> [...]

1.4. The Tools of the CRUSH Suite

CRUSH provides a collection of other useful tools. Here is a short
description of what is there and what each tool is used for. Each tool,
when run without options (or with the -help option), will print a list of
its available options on the console.

  crush      The principal reduction tool.

  imagetool  A tool for manipulating images. Can also deal with
             images produced by BoA (and to some degree with other
             images also).

  show       A simple display program for FITS files, with
             various useful functions for simple analysis and image
             manipulation.

  coadd      Add FITS images together. Use as a last-resort tool, as
             it is always better to reduce scans together.

  difftool   Lets you look at the difference between two images.
             (Used to be named 'difference'.)

  histogram  Write a histogram of the pixel distribution of an
             image plane (e.g. `flux`, `rms`, `s2n`, `weight`, or
             `time`).

  detect     A source extraction tool for maps. You should make
             sure that the noise is close enough to Gaussian (e.g.
             with `histogram`) before relying on this.

  esorename  Rename ESO archive files to their original names.
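
For example, to list the options accepted by imagetool:

  > imagetool -help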

1.5. Quick Start

To reduce data, simply specify the instrument (e.g. sharc2, laboca…)
and the scan numbers/ids (or the names of the scan files or directories) to
crush. (You may have to add ./ before crush if the current directory
. is not in your path.) E.g.,

  > crush laboca 10059

will reduce LABOCA scan 10059 with the default reduction parameters.
(Try crush -help to see a full list of the supported instrument names.)

After the instrument name, you can specify additional options. Some of these
apply to the reduction as a whole while others will affect the scan
processing for those scan numbers that are listed after the option flag.

If you are new to CRUSH (or used version 1.xx before), you should be able
to start working with it by the time you get to the end of this page.
Nevertheless, I recommend that you read on through the entirety of Sections
1-2 (Getting Started & Basic Configuration), and you will become a
truly well-versed user. :-)

Once you get the hang of it, and feel like you need more tweaking ability,
feel free to read on yet further to see what other fine tuning capabilities
exist…

Here are some quick tips:

  • Reduce as many scans together as you can. E.g.

    > crush laboca 10657-10663 10781 11233-11235 [...]
  • You can specify opacities, pointings, scaling etc. for each
    set of scans listed (see the section on Scan Configuration).
    E.g.,

    > crush laboca -tau=0.32 10657-10663 10781 -tau=0.18 11233 [...]

    will use a zenith tau value of 0.32 for 10657-10663 and 10781, and
    0.18 for the last scan (11233).

  • If you suspect that you are missing extended emission (see section
    on the Recovery of Extended Structures), then you can specify the
    ‘extended’ option, which will better preserve large scale structures
    albeit at the price of more noise. E.g.

    > crush laboca -extended 10657-10663
  • If your source is faint (meaning S/N < 10 in a single scan), then
    you may try using the faint option. E.g.

    > crush laboca -faint 10657-10663

    or,

    > crush laboca -faint -extended 10657-10663

    to also try to preserve extended structures (see above).

  • For finer control of how much large scale structures are retained
    you can use the sourcesize option, providing a typical source
    scale (in arcsecs) that you’d like to see preserved. Then CRUSH will
    optimize its settings to do the best it can to get clean maps while
    keeping structures up to the specified scales more or less intact.
    E.g.

    > crush [...] -sourcesize=45.0 10657-10663

    will optimize the reduction for <~ 45 arcsec sources.

  • Finally, if your sources are not only faint, but also point-like,
    you should try the deep option. This will use the most aggressive
    filtering to yield the cleanest looking maps possible.
    E.g.,

    > crush laboca -deep 10657-10663

With just these few tips you should be able to do a decent job of
getting the results you seek. CRUSH can also help you reduce
pointing/calibration, skydip, and pixelmap scans. E.g.:

  • To reduce a laboca pointing/calibration scan 11564:

    > crush laboca -point 11564

    At the end of the reduction CRUSH will suggest pointing corrections
    and provide detailed information on the source (such as peak and
    integrated fluxes, FWHM and elongation). Once you determine the
    appropriate pointing corrections for your science scans, you
    can apply these via the pointing=x,y option. E.g.:

    > crush laboca -pointing=3.2,-0.6 12069
  • You can also reduce skydips, for determining in-band atmospheric
    opacities. E.g.:

    > crush hawc+ -skydip 26648
  • Finally, you can derive pixel position information from appropriate
    scans, which are designed to ensure the source is moved over every
    detector, in a fully sampled manner. To reduce such beam-maps:

    > crush gismo -pixelmap 5707

    CRUSH will write rcp files as output, containing the pixel positions
    and the source and sky gains in the standard 5-column APEX RCP format.
    You can feed the pixel position information into the reduction of
    other scans via the rcp option, as sketched below.
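
    A minimal (illustrative) example; the rcp file name here is
    hypothetical:

      > crush gismo -rcp=mypixels.rcp <scans>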

There are a lot more fine-tuning possibilities for the more adventurous. If
interested, you can find documentation further below, as well as in the
README files inside the instrument sub-directories. For a complete list of
crush options, please refer to the GLOSSARY.

1.6. A Brief Description of What CRUSH Does and How…

CRUSH is a reduction pipeline that is principally meant to remove
correlated signals (correlated noise) from the time-streams, to arrive
at clean & independent bolometer signals, which are then used to make
a source model (usually an image).

As such, it is not interactive reduction software (as opposed to, e.g.,
BoA). The term ‘scripting’ in CRUSH mainly means defining configuration
options (on the command line or through configuration files), which are
parsed in the order they are read.

During the reduction, CRUSH aims to arrive at a consistent set of
solutions for the various correlated signals, the corresponding gains, and
the pixel weights, while also identifying and flagging problematic data.

This means a series of reduction steps, which are iterated a few times
until the required self-consistent solutions are reached.

To learn more about the details, please refer to Kovacs, A., “CRUSH:
fast and scalable data reduction for imaging arrays,” Proc. SPIE 7020, 45
(2008). If that does not satisfy your curiosity, then you can find yet
more explanation in Kovacs, A., PhD thesis, Caltech (2006).

1.7. Command-line Options and Scripting Support

Configuration of CRUSH is available either through command-line
options or via scripting. You have seen scripting already in the form
of ‘default.cfg’, which stores the default configuration values (Sec. 1.3).

The syntax is designed so that it accommodates both scripting and
command-line (bash) use alike. Thus, white spaces are optional and curved
brackets are avoided (unless they are placed in quotes).

For a complete guide to using the configuration engine, please read
README.syntax. Below we provide only a very basic overview, and describe
only what is specific to CRUSH, beyond the configuration engine in general.

Basic Rules

When defining settings, keys can be separated from their values either by
=, : or empty spaces (or even a combination of these).

Command line options start with a dash - in front. Thus, what may look
like:

  key1 value1
  key2 value2, value3

in a configuration script, will end up as

  > crush [...] -key1=value1 -key2=value2,value3 [...]

on the command line. Otherwise, the two ways of configuring are generally
equivalent to one another. One exception to this rule is reading scans,
which is done via the read key in a script, whereas on the command line
you simply list the scan numbers (or ranges, lists, or names). I.e.,

  [...]
  read 10056            # in a script
  > crush [...] 10056   # on the command line

In the next section you’ll find a description of the scripting keywords.
Now that you know how to use them also as command line options, you
can choose scripting or command-line, or mix-and-match them to your
liking and convenience…

Key/value pairs are parsed in the order they are specified. Thus, each
specification may override previously defined values.

Lines that start with # designate comments that are ignored by the
parser.
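
To illustrate the rules above, the following script lines (a trivial
example reusing the datapath option from Sec. 1.3) all set the same value,
and the comment line is ignored:

  # Any of these separator styles works:
  datapath ~/data
  datapath=~/data
  datapath : ~/data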

Dynamic Conditionals

Some conditions are interpreted dynamically. For example, CRUSH can
activate settings based on observation date (or MJD) or serial number, on
a scan-by-scan basis, such as:

  mjd.[54555-54869] rcp {$CRUSH}/laboca/laboca-3.rcp

which loads the pixel positions (RCP) laboca-3.rcp inside the laboca
subfolder in the crush installation for scans taken in the MJD range
54555 to 54869. Similarly,

  date.[2007.02.28-2009.10.13] instrument.gain -1250.0

or

  serial.[14203-15652] flag 112-154,173

are examples of settings activated based on dates or scan serial numbers.

You may also specify conditional settings based on the
source names under the object branch. E.g.:

  object.[Jupiter] bright

will invoke the bright configuration for Jupiter. The check for the
source name is case insensitive, and matches all source names that begin
with the specified sequence of characters. Thus, the SHARC-2 configuration
line:

  object.[PNT_] point

will perform a pointing fit at the end of the reduction for all sources
whose catalog names begin with PNT_ (e.g. PNT_3C345).

These examples also demonstrate that conditionals can be branched just
like options. (In the above cases, the conditions effectively reside in the
mjd, date, or serial branches.) Other commonly used conditionals of
this type are iteration-based settings:

  > crush [...] -iteration.[last-1]whiten=2.0 [...]

will activate the noise whitening in the iteration before the last one.
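
In script form, following the basic rules above, the same conditional
would read:

  iteration.[last-1] whiten 2.0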

Startup Configuration

At launch, crush parses config/default.cfg to establish the default
configuration. All configuration files are first parsed from the
config/ directory inside the crush installation; then additional options
or overrides are also read from the appropriate instrument subdirectories,
if they exist.

After that, crush checks whether these configuration files also exist
inside the user’s ~/.crush2 directory. If so, they are used to override
the settings once more. See more on this in the next section under the
config option. In this way, the ~/.crush2 directory provides a
convenient way to create user-specific settings and/or overrides to the
global defaults.

1.8. Making Sense of the Console Output

Don’t ignore the console output. It contains a lot of information, which
may be useful for diagnosing your reduction, and troubleshooting if things
don’t go as expected.

At the beginning of the console output, you will see information from each
scan as it is being read, followed by output from the preprocessing steps
(such as downsampling, scan statistics, velocity clipping, opacity used,
pointing corrections applied etc.). This part is verbose enough that it
should be essentially self-explanatory.

Once all the scans are loaded, and the source model (usually a map) has been
created to accommodate the dataset, CRUSH will enter the iterated pipeline
reduction. It may look cryptic at first, as it captures a lot of information
in very little console space. Below is a short guide to figure out what
it all means. Understanding it will help you diagnose and troubleshoot all
sorts of issues that might affect the reduction of your dataset. So, don’t
be shy, dive in!

An example line from the console output

Let’s begin with a concrete example. Here is a section from the console
output from the reduction of LABOCA scan 11564 (from 2007):

  $ Round 4:
  $
  $ [11564] D(128) C245 W(8.44E-2)245 dN(0)245 B245 c245 Map(8.44E-2)
  $ [Source] {check} (smooth) (clip:4.0) (blank:10.0) (sync)

Each phrase (separated by spaces around it) represents a short summary of
a pipeline step. The leading characters identify the step, often followed
by some quantitative capture (or figure of merit) summarizing what the step
accomplished.

Here is how the above section is interpreted:

In the 4th iteration, the following steps are performed on scan 11564:

  D(128)        1/f filtering (on a 128-frame timescale).

  C245          Removing the correlated noise from the array, with
                245 pixels surviving the gain range criterion.

  W(8.44E-2)    Deriving pixel weights with the `rms` method. The
                average sensitivity of the array is estimated to be
                84.4 mJy sqrt(s).

  dN(0)245      Despiking via the `neighbours` method, with 0 frames
                flagged as spiky, and 245 pixels surviving the
                flagging of pixels with too many spikes.

  B245          Decorrelating on amplifier boxes, with 245 pixels
                having acceptable gains to the correlated signals.

  c245          Decorrelating on electronic cables, with 245 pixels
                showing acceptable gain response to these signals.

  Map(8.44E-2)  Mapping the scan, with an estimated point-source NEFD
                of 84.4 mJy sqrt(s).

Then, at the end of the iteration a composite source model is created. This
is further processed as:

  {check}       Remove invalid data from the scan maps (before adding
                to the composite).

  (smooth)      Smooth the composite by the desired amount.

  (clip:4.0)    Discard map points below an S/N of 4.0 from the
                composite.

  (blank:10.0)  Do not use bolometer data in other steps, which are
                collected over the map points with S/N > 10, thus
                reducing the bias that bright sources can cause in the
                reduction steps.

  (sync)        Synchronize the composite model back to the time-stream
                data.

Once the reduction is done, various files, including the source map, are
written. This is appropriately echoed on the console output.

Console Output Reference Guide

Below is a reference guide for interpreting the console output for your
particular reduction:

Quantities inside the brackets:

  [nnnnn|m]  Processing scan nnnnn, subscan m from the list.

  (#.##E#)   The effective point-source NEFD (usually after
             weighting or at the mapping step), shown in the
             effective mapping units times sqrt(s).

  []         Bracketed models are solved via median estimators.

  (#)        Indicates the time resolution of the given step as
             the number of downsampled frames, when this is not
             the default full resolution.

Reduction-step shorthands (alphabetically):

  a         Decorrelating amplifier boards (e.g. LABOCA, GISMO).
  am        Remove correlated telescope acceleration (magnitude).
  ax        Remove correlated telescope x acceleration.
  ay        Remove correlated telescope y acceleration.
  B         Decorrelating electronic boxes (e.g. LABOCA).
  C         Solving for correlated noise and pixel gains.
            The time resolution (in frames) is shown in brackets
            if not full resolution, followed by the number of
            unflagged pixels if gains are solved and a gain range
            is defined for flagging.
  c         Decorrelating electronic cables (e.g. LABOCA)
            or geometric columns (e.g. GISMO).
  cx        Remove correlated chopper x position.
  cy        Remove correlated chopper y position.
  D         Solving for pixel drifts (1/f filtering).
  dA(#)     Despiking absolute deviations.
            The brackets show the percentage # of frames
            flagged in the data as spiky by any method.
  dG(#)     Like the above, but proceeds gradually.
  dN(#)     Despiking using neighbouring samples in the timeseries.
  dM(#)     Despiking at multiple resolutions (up to the 1/f
            filter timescale).
  E         Remove correlated SAE error signal (GISMO).
  G         Estimating atmospheric gradients across the FOV.
  J         De-jumping frames (e.g. for GISMO). Followed by two
            colon (:) separated numbers: the first, the number of
            de-jumped blocks that were re-levelled and kept; the
            second, the number of blocks flagged.
  m         Decorrelating on multiplexers (e.g. GISMO, SCUBA-2).
  Mf        Filtering principal telescope motion.
  O         Solving for pixel offsets.
  p         Decorrelating on (virtual) amplifier pins (e.g. GISMO).
  Q         Decorrelating wafers (e.g. ASZCA).
  r         Decorrelating on geometric rows (e.g. SHARC-2, GISMO).
  t         Solving for the twisting of band cables (e.g. LABOCA).
  tx        Remove correlated telescope x position.
  ty        Remove correlated telescope y position.
  tW        Estimating time weights.
  W         Estimating pixel weights.
  w         Estimating pixel weights using the 'differential'
            method.
  wh(x.xx)  Noise whitening. The average source throughput factor
            from the whitening is shown.

Source model-specific:

  Map             Calculating the source map from the scan. The
                  effective point-source NEV/NEFD of the instrument is
                  shown in the brackets (e.g. as Jy sqrt(s)).
  [C1~#.##]       Filtering corrections are applied directly and are
                  #.## on average.
  [C2=#.##]       Faint structures are corrected for filtering after
                  the mapping via an average correction factor #.##.
  [Source]        Solving for the source map.
  {check}         Discarding invalid map data.
  {despike}       Despiking scan maps before adding to the composite.
  {filter}        Spatial large-scale structure (LSS) filtering of the
                  scan maps.
  (level)         Zero-levelling to the map median.
  (smooth)        Smoothing the map.
  (filter)        Spatial filtering of the composite map.
  (noiseclip)     Clip out the excessively noisy parts of the map.
  (filter)        Filtering large-scale structures (i.e. sky noise).
  (exposureclip)  Clipping under-exposed map points.
  (blank)         Blank excessively bright points from noise models.
  (sync)          Removing the source model from the time-stream.

1.9. Examples

Reduce scans 12065-12069 and 12072 with zenith tau of 0.18:

  > crush laboca -tau=0.18 12065-12069 12072

Reduce scans 10562 and 10645 together, with the first scan observed at a
zenith tau of 0.21, and the second at a tau of 0.35:

  > crush laboca -tau=0.21 10562 -tau=0.35 10645

Say you realize that the pointing was off by -5.4” in AZ, and 2.4” in EL
for the second scan. Then:

  > crush laboca -tau=0.21 10562 -tau=0.35 -pointing=-5.4,2.4 10645

Say scan 10049 is a scan of a bright source (e.g. Saturn) and the default
reduction ends up clipping much of it. Then,

  > crush laboca -bright 10049

If the source still gets nipped from the resulting map, you can try
disabling pixel weighting altogether (this is the likely culprit) by:

  > crush laboca -bright -blacklist=weighting 10049

Perhaps you suspect there is missing extended emission (large-scale
structure) in scan 10550. You can try to recover more by:

  > crush laboca -extended 10550

Try reducing a faint source (scan 10057):

  > crush laboca -faint 10057

Maybe your faint source has extended structure that you want to try to
recover better, and you are willing to pay some noise penalty for it. Then
try:

  > crush laboca -faint -extended 10057

You can also fine-tune the largest source scale (more or less) that you
are interested in. The reduction will then adjust its parameters
accordingly. Say you expect your source to be a blob around 1’ in diameter;
then you can try:

  > crush laboca -faint -sourcesize=60.0 10057

You can also play with the blanking and clipping methods above if there are
annoying negative dips remaining around your brighter peaks. Note that
you probably want to stick with -extended, or else such dips may be the
result of the aggressive filtering settings applied to your specific
source size (via sourcesize).

As mentioned before, command-line options and scripting are equivalent
ways of configuring the reduction. Thus, running the script test.cfg:

  faint
  extended
  tau 0.18
  pointing -3.2,4.8
  read 12065-12069
  pointing 2.3,-0.5
  read 12072
  blank 10.0
  clip 3.0
  iteration.[last-1] forget clip

via

  > crush laboca -config=test.cfg

is equivalent to the command line:

  > crush laboca -faint -extended -tau=0.18 \
      -pointing=-3.2,4.8 12065-12069 \
      -pointing=2.3,-0.5 12072 \
      -blank=10.0 -clip=3.0 -iteration.[last-1]forget:clip

Finally, suppose you want to reduce a dataset on a very faint point source,
such as a distant galaxy or a faint core, with a set of scans:

  > crush laboca -deep 23165-23169

And perhaps you want to compare the result to a random jackknife, which
you can obtain with:

  > crush laboca -deep -jackknife 23165-23169

2. Advanced Topics

2.1. Pointing Corrections

Reducing the data with the correct pointing can be crucial (esp. when
attempting to detect/calibrate faint point sources). At times the pointing
may be badly determined at the time of observation. Luckily, getting the
exact pointing offset wrong at the time of the observation has no serious
consequences as long as the source still falls on the array, and as long as
the exact pointing can be determined later. If you, at the time of the data
reduction, have a better guess of what the pointing was at the time the
data was taken (e.g. by fitting a model to the night’s pointing data), you
can specify the pointing offsets that you believe were more characteristic
to the data, by using the -pointing option before listing the scans
that should be reduced with the given corrections. E.g.,

  > crush [...] -pointing=12.5,-7.3 <scans>

will instruct CRUSH that the true pointing was offset by dAZ=12.5 and
dEL=-7.3 arcsec (i.e., it works just like pcorr in APECS).

Some instruments, like SHARC-2, may allow specifying the aggregated pointing
offsets (e.g. fazo and fzao) instead of the differential corrections
supplied by pointing.
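
E.g., for SHARC-2 this might look like the line below (the offset values
are made up for illustration; see the SHARC-2 README for the exact usage):

  > crush sharc2 -fazo=-12.5 -fzao=7.3 <scans>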

Obtaining Corrections

Good practice demands that you regularly observe pointing/calibration
sources near your science targets, from which you can derive appropriate
corrections. CRUSH provides the means for analyzing pointing/calibration
data effectively, using the point option:

  > crush [...] -point [...]

At the end of the reduction, CRUSH will analyze the map, and suggest
appropriate pointing corrections (to use with -pointing, or other
instrument-specific options), and provide calibration information as well
as some basic measures of source extent and shape.

After Reduction (a poor alternative)

You can also make pointing changes after the reduction (alas, now in
RA/DEC). You can read off the apparent source position from each separately
reduced scan (e.g. by using show and placing the cursor above the apparent
source center/peak). Then you can use imagetool to adjust the pointing.
E.g.:

  > imagetool [...] -origin=3.0,-4.5 ...

The above moves the map origin by 3” in RA and -4.5” in DEC.

Then, other crush tools (like coadd, imagetool etc.) will use these images
with the proper alignment. Clearly, this method of adjusting the pointing is
only practical if your source is clearly detected in the map.

2.2. Pixelization and Smoothing

There seem to be many misconceptions about the correct choice of
pixelization and about the (mal)effects of smoothing. This section aims to
offer a balanced account on choosing an appropriate mapping grid and on the
pros and cons of applying smoothing to your maps.

Pixelization (i.e. choosing the ‘right’ grid)

Misconception: You should always Nyquist sample your map, with 2
pixels per beam FWHM, to ensure ideal representation.

There is more than one thing wrong with the above idea. First, there is no
‘Nyquist’ sampling of Gaussian beams. The Nyquist sampling theorem applies
only to data, which has a natural cutoff frequency (the Nyquist frequency),
above which there is no information present. Then, and only then, will
sampled data (at a frequency strictly larger[!] than twice the Nyquist
cutoff) preserve all information about the signals.

Two pixels per beam is almost certainly too few for the case of Gaussian
beams. Gaussian beams have no true frequency cutoff — the signal spreads
across the spectrum, with information present at all frequencies, even if it
is increasingly tapered at the higher frequencies. Whatever sampling
(i.e. pixel size) you choose for your map, the information above your
sampling rate will be aliased into the sampled band, corrupting your
pixelized data.

Thus, you can choose a practical cutoff frequency (i.e. pixelization) based
on the level of information loss and corruption you are willing to tolerate.
At 2.5 pixels per FWHM, you retain ~95% of the underlying information, and
corrupt it by the remaining 5%. With more pixels per beam, you get more
accurate maps. (The 2 pixels per beam that many understand to be
‘Nyquist sampling’ is certainly too few by most standards of accuracy!)

Thus, the CRUSH default is to use around 5 pixels per FWHM, representing a
compromise between completeness (~99%) with minimal corruption (<1%), and a
senseless increase in the number of map pixels (i.e. model parameters).
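
E.g., assuming (purely for illustration) an instrument beam of 20" FWHM,
the default of ~5 pixels per FWHM corresponds to a ~4" grid, which you
could also set explicitly via the grid option (Sec. 3.3):

  > crush [...] -grid=4.0 [...]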

Smoothing

Misconception: You should not smooth your maps, because it does
something unnatural to your raw image.

The first problem with this view is that there is no natural or raw image
to start with, because there is no natural pixelization (see above).
Secondly, by choosing a map pixel size, you effectively apply smoothing —
only that your smoothing kernel is a square pixel, with an abrupt edge,
rather than a controlled (and directionless) taper of choice.

(To convince yourself that map pixels apply smoothing, consider the mapping
process: each pixel will represent an average of the emission from the area
it covers. Thus, the pixel values are essentially samples of a moving
integral under the pixel shape — i.e. samples of the emission convolved
[smoothed] by the pixel shape.)

However, smoothing (including coarse pixelization!) does have a real
downside, in that it degrades the effective resolution of the image. Consider
smoothing by Gaussian kernels. Then, the image resolution (imageFWHM)
increases with the smoothing width (smoothFWHM) as

  imageFWHM^2 = beamFWHM^2 + smoothFWHM^2

from the instrument resolution (beamFWHM). However, you can smooth a fair
bit before the degradation of resolution becomes a problem or even really
noticeable. If you use sub-beam smoothing with smoothFWHM < beamFWHM, then
the relative widening is roughly

  dW / W0 ~ 0.5 * (smoothFWHM / beamFWHM)^2
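
For example, for half-beam smoothing (smoothFWHM = 0.5 * beamFWHM), the
approximation gives dW / W0 ~ 0.5 * 0.25 = 0.125, while the exact
quadrature formula above yields imageFWHM = sqrt(1 + 0.25) * beamFWHM ~
1.118 * beamFWHM, i.e. the ~12% widening quoted below and in Table 1.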

Thus, smoothing by half a beam degrades the resolution by only ~12%… At the
same time, smoothing can bring many benefits:

  • Improves image appearance and S/N by rejecting pixelization noise. It is
    better to use a finer grid and smooth the image by the desired amount,
    than to pixelize coarsely — not only for visual appearance but also
    for the preservation of information.

  • Smoothing can help avoid pathological solutions during the iterations,
    by linking nearby pixel values together.

  • Beam-smoothing is the optimal (Wiener) filtering for determining point
    source fluxes in a white noise background. I.e., smoothing by the
    beam produces the highest signal-to-noise measurement of point sources.

  • Beam-smoothing is mathematically equivalent to fitting fixed-width beams
    at every pixel position. The beam-smoothed flux value in each pixel
    represents the amplitude of a beam fitted at that position using
    chi-squared minimization. Thus, beam smoothed images are the bread-and-
    butter of point source extraction (e.g. the detect tool of CRUSH).

Given the pros and cons of smoothing, the different reduction modes
(default, faint or deep) of CRUSH make different compromises. The
default reduction aims to provide maps with maximal resolution (i.e. no
smoothing), although some smoothing is used during the iterations to aid
convergence to a robust solution. faint mode reductions smooth by 2/3
beam (resulting in ~22% degradation of resolution), to provide better
signal-to-noise at a moderate loss of spatial resolution. Finally, deep
reductions yield beam-smoothed images, which are ideal for point source
extraction, even if some spatial details are sacrificed.

You can change/override these default settings. The smoothing during
reduction is controlled by the smooth setting, which can take a
Gaussian FWHM (in arcsec) as its argument, as well as the preset values
minimal, halfbeam, 2/3beam and beam. The smoothing of the final map
can be controlled by ‘final:smooth’ (the final stands as a shorthand
conditional for the last iteration). Thus,

  > crush [...] -smooth=halfbeam -final:smooth=minimal [...]

smoothes by half a beam during the iterations, and only slightly for the
output image, while

  > crush [...] -forget=smooth -final:forget=smooth [...]

disables smoothing both during reduction and at the end. The table below
summarizes the effect of the preset smoothing values, indicating both the
degradation of resolution (relative to the telescope beam) and the relative
peak S/N on point sources:

Table 1. Smoothing properties

  smooth     widening   rel. S/N
  minimal        6%       0.33
  halfbeam      12%       0.50
  2/3beam       22%       0.67
  beam          41%       1.00

2.3. Recovery of Extended Emission

As a general rule, ground based instruments (in the mm and submm) are only
sensitive to structures smaller than the Field-of-View (FoV) of the
instrument. All scales larger than the FoV will be strongly degenerate with
the bright correlated atmosphere (a.k.a. sky noise), and will be very
difficult, if not outright impossible, to measure accurately. In a sense,
this is analogous to the limitations of interferometric imaging, or the
diffraction-limited resolution of finite apertures.

However, there is some room for pushing this limit, just as it is
possible to obtain a limited amount of super-resolution (beyond the
diffraction limit) using deconvolution. The limits to recovering scales
beyond the FoV are similar to those of obtaining super-resolution beyond
the diffraction limit. Both of these methods:

  1. yield imperfect answers, because the relevant spatial scales are
    poorly measured in the first place.

  2. are only feasible for high signal-to-noise structures.

  3. can, at best, push a factor of a few beyond the fundamental
    limit.

You can usually choose to recover more extended emission if you are willing
to put up with more noise on those larger scales. This trade-off works
backwards too — i.e., you can get cleaner maps if you are willing to filter
them more.

As a general principle, structures that are bright (>> 5-sigma) can be
fully recovered up to a few times the field-of-view (FOV) of the bolometer
array. However, the fainter the structure, the more it will be affected by
filtering.

Generally, the fainter the reduction mode, the more filtering of faint
structures results, and the more limited the possibility of recovering
extended structures becomes. The table below is a rough guide to what
maximum scales you may expect for such faint features, and also how noise
is expected to increase on the large scales as these are added in extended
mode:

Table 2. Maximum faint structure scales for S/N <~ 5

  option             Max. sensitive scale     Noise power scaling
  default / bright   FOV/2                    ~L^2
   + extended        FOV                      ~L^2
  faint / deep       2*sourceSize or 2*beam   ~L
   + extended        FOV/2                    ~L

Iterating longer will generally help recover more of the not-too-faint
large-scale emission (along with the large-scale noise!). Thus,

  > crush -extended -rounds=50 [...]

will generally recover more extended emission than just the default

  > crush -extended [...]

(which iterates 10 times). In general, you may expect to recover significant
(>5 sigma) emission up to scales L as:

  L ~= FoV + sqrt(N v T)

in terms of the number of iterations N, the limiting field-of-view (FoV),
the scanning velocity v, and the correlated noise stability timescale T.
Unfortunately, the correlated noise stability of most ground-based
instruments is on the order of a second or less due to a highly variable
atmosphere. At the same time, the noise rms on the large scales will
increase asymptotically as:

  rms ~ sqrt(N)

for large numbers of iterations.

2.4. Filtering corrections, transfer functions, etc.

Most reduction steps (decorrelation, 1/f drift removal, spectral filtering)
will impact different spatial frequencies of the source structure
differently. In general, the low spatial frequency components are most
affected (suppressed). E.g. decorrelating the full array will reject
scales larger than the field-of-view, while 1/f filtering will result in
1/f spatial filtering of the map due to the scanning motion, which more
or less directly maps temporal frequencies into spatial frequencies.

The approach of CRUSH is to apply appropriate point-source corrections
(rescaling) such that point sources in the map yield the same fluxes no
matter how (i.e. with what options exactly) the source was reduced.
While the corrections will be readily sufficient for a large fraction of
the science cases in this simple form, there are two notable exceptions
to this rule: (i) extended emission, and (ii) small, fast maps reduced
in deep mode (when the source is scanned over the same detector more than
once over the stability timescale).

The better approach for these cases is to measure a transfer function, or
otherwise check the reduction of a similar simulated source.

Transfer functions and simulated sources

The sources option provides a means of inserting test sources into
CRUSH maps, while one of the jackknife options can be used to remove
any real emission beforehand while retaining the signal and noise structure
of the data otherwise (which is important in order to get a representative
measure of the transfer function). E.g.

  > crush [...] -jackknife.alternate -sources=test.mask ...

will apply an alternating jackknife to the input scans, and insert the
sources specified in the mask file test.mask (see example.mask for the
format of mask files, and the GLOSSARY for more on jackknifing options).

To measure transfer functions (i.e. complete spatial characterization of
the point-source response) you would want to insert a single beam-sized
point source. Alternatively, you can insert one or more Gaussian-shaped
source(s) with a range of FWHMs to create a simulated source structure that
resembles the structure you expect your source(s) to have.

Make sure your test source is bright enough to see with high S/N, but not
so bright as to trigger unintended flagging or despiking. In general, an
S/N between 100 and 1000 should be fine for default reductions, and 100 to
300 for faint or deep modes.

Additionally, in faint or deep modes, you may want to disable some
features which may affect your relatively bright test sources differently
than your much fainter real science target(s). Thus, the following are
recommended for reducing faint and deep test sources:

  > crush [...] -blacklist=clip,blank,source.filter.blank

Note also that the spatial filtering (transfer function) will vary with
location on the map (since the scanning speed and direction will
themselves be non-uniform over the map). Therefore, it is strongly
recommended that test sources be inserted near the same locations as the
real sources in the field.

2.5. Image Processing Post-Reduction

CRUSH also provides imagetool for post-processing images after reduction.
The typical usage of imagetool is:

  > imagetool -name=<output.fits> [options] <input.fits>

which processes <input.fits> with the given options, and writes the result
to <output.fits>.

With imagetool, you can apply the desired level of smoothing (-smooth), or
spatial filtering (-extFilter), specify circular regions to be skipped by
the filter (-mask=<filename>). You can adjust the clipping of noisy map
edges by relative exposure (-minExp) or by relative noise (-maxNoise).
You can also crop a rectangular region (-crop=dx1,dy1,dx2,dy2). There are
many more image operations. See the imagetool manual (in your UNIX shell, or
online) or simply run imagetool without an argument:

  > imagetool

One useful option lets you toggle between the noise estimate from the
detector time-streams (-noise=data) and from the image itself
(-noise=image) using a robust estimator. For example, after spatial
filtering, you probably want to re-estimate the map noise:

  > imagetool [...] -extFilter=45.0 -noise=image <input.fits>

The built-in image display tool show also takes all processing options
of imagetool, but rather than writing the result, it will display it in
an interactive window. See the manual page of show (either inside your
UNIX shell, or online), or run show without an argument.
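
E.g., since show accepts the processing options of imagetool, the
following would display an image with beam smoothing applied on the fly:

  > show -smooth=beam <input.fits>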

2.6. Reducing Very Large Datasets

Coming soon…


3. Advanced Configuration

In this section, you can find information on the most useful configuration
options. A complete list of available settings can be found in the
text file GLOSSARY, located in the CRUSH installation directory.

Configuration options here are listed as scripting keys, i.e. without a
preceding dash. However, you can use the same options on the command line
by adding the dash. (Also, = can be replaced by space(s) in scripting…)

3.1. Commands

There are a handful of keywords that are treated as commands by crush. (The
difference being that commands are interpreted and acted on right away, whereas
typical configuration keys are stored settings that are interpreted later as
necessary.) The commands are:

  config=<filename>      Load the configuration file <filename>.
                         The file is looked for in the following
                         locations, in order:

                           1. ./
                           2. ./<instrument>/
                           3. ~/.crush2/
                           4. ~/.crush2/<instrument>/

                         Whenever a matching file is found, its contents
                         are parsed. Because of the ordering, it is
                         convenient to create overriding configurations.
                         Thus instrument-specific settings can be used
                         to override default settings, and user-specific
                         settings placed in '~/.crush2' can override
                         shipped defaults. Whenever a configuration is
                         parsed, there is a note of it on the console
                         output, so that one always knows which files
                         were read and in what order.
                         E.g. when using

                           > crush laboca -faint 12066

                         the following configuration files will be
                         loaded in order (provided they exist):

                           <crush>/config/default.cfg
                           <crush>/config/laboca/default.cfg
                           ~/.crush2/default.cfg
                           ~/.crush2/laboca/default.cfg
                           <crush>/config/faint.cfg
                           <crush>/config/laboca/faint.cfg
                           ~/.crush2/faint.cfg
                           ~/.crush2/laboca/faint.cfg

                         Each successively loaded file may override
                         the options set before it.
                         When a matching configuration file is not found
                         in any of the standard locations (above), CRUSH
                         will make one last attempt to interpret the
                         argument as a regular pathname. This allows
                         users to store and invoke configuration files
                         anywhere on the filesystem.

  forget=<option>...     Forget the previously set values for <option>,
                         as if it were never defined. E.g.

                           forget=outpath

                         will unset the 'outpath' option.
                         You can specify more than one option as a
                         comma-separated list. E.g.

                           forget=outpath,project

                         will unset both the 'outpath' and 'project'
                         options.
                         Additionally, the arguments 'conditions' and
                         'blacklist' can be used to clear the
                         conditional or blacklisted settings,
                         respectively.

  recall=<option>        Undoes 'forget', and reinstates the <option>
                         to its old value.

  remove=<option>        Similar to 'forget', but removes the entire
                         branch. Thus '-remove=despike' unsets:

                           despike
                           despike.level
                           despike.method
                           despike.flagfraction
                           ...

  replace=<option>       Undoes the 'remove' option, reinstating the
                         <option> tree to its prior state.

  blacklist=<option>...  Similar to 'forget', except it will not let the
                         options be set even if they are specified again
                         at a later time. This is useful for removing
                         settings from the configuration altogether.

  whitelist=<option>...  Remove <option> from the blacklist, allowing
                         it to be set again if desired. Whitelisting
                         an option will not reinstate it to its prior
                         value. After whitelisting, you must explicitly
                         set it again, or 'recall' or 'replace' it
                         to its prior state.

  poll                   Whenever unsure what options are set at any
  poll=<option>          given stage, you can poll the settings.
                         Without an additional argument, it will list
                         all options to the standard output. When
                         an argument is specified, it will list
                         all configuration settings that start with
                         the specified string. E.g.

                           > crush [...] -poll=iter

                         will list all iteration-based options that
                         are set, including all the [...] options set
                         prior to '-poll' in the command line.

  list.divisions         List all pixel divisions, which can be
                         decorrelated for the instrument.

  list.response          List all response modalities for the
                         instrument (to known signals, such as
                         telescope movement, or temperature data).
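
As a combined (illustrative) example, the command below blacklists the
despiking branch and then polls every despike-related setting to verify
that none remain active:

  > crush laboca -blacklist=despike -poll=despike <scans>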

3.2. Pipeline Configuration

a. Source types.

The default reduction (see default.cfg) is optimized
for mapping compact (up to the field-of-view or smaller) sources in the S/N
range of ~10-1000. These options are useful if your source does not match
these criteria.

  bright              Use for bright sources (S/N > ~1000). This setting
                      entirely bypasses all filtering to produce a very
                      faithful map. The drawback is more noise, but that
                      should not be an issue for such a bright guy :-)
                      Will invoke 'bright.cfg'.

  faint               Use with faint sources (S/N < ~30), when the
                      source is faint but still visible in a single scan.
                      This setting applies somewhat more aggressive
                      filtering of the timestreams and of extended
                      structures. It invokes 'faint.cfg'.

  deep                Use for very faint sources which are not at all
                      detected in single scans, or if you think there is
                      too much residual noise (baselines) in your map
                      for your liking. This setting results in the most
                      aggressive filtering. Will load the configuration
                      from 'deep.cfg'. The output map is optimally
                      filtered (smoothed) for point sources.

  extended            Try to better preserve extended structures. This
                      setting can be used alone or in combination with
                      the above brightness options. See also
                      '-sourcesize=X' below. With the fainter settings
                      the recovery of extended structures becomes
                      increasingly more difficult. For bright structures,
                      recovery up to the FOV (or beyond!) should be
                      possible, while for faint structures ~1/4 FOV - FOV
                      scales are maximally obtainable (see more on this
                      in the section on extended emission above).

  source.type=<type>  By default, crush will try to make a map from
                      the data. However, some instruments may take
                      data that is analyzed differently. For example, you
                      may want to use crush to reduce pixel maps (to
                      determine the positions of your pixels on sky), or
                      skydips (to derive appropriate opacities), or do
                      point source photometry. Presently, the following
                      source types are supported across the board:

                        map       Make a map of the source (default).
                        skydip    Reduce skydips, and determine
                                  opacities by fitting a model to them.
                        pixelmap  Create individual maps for every
                                  pixel, and use them to determine their
                                  location in the field of view.

                      Note that you may also just use the 'skydip' and
                      'pixelmap' shorthands to the same effect. E.g.

                        > crush [...] -skydip [...]

  sourcesize=X        This option can be used instead of 'extended', in
                      conjunction with 'faint' or 'deep', to specify the
                      typical size of sources (FWHM in arcsec) that are
                      expected. The reduction then allows filtering
                      structures that are much larger than the specified
                      source size...
                      If 'sourcesize' or 'extended' is not specified,
                      then point-like compact sources are assumed.

b. Other commonly used pipeline settings:

(in typical order of importance to users…)

  outpath=<path>      Set the directory into which output files (e.g.
                      maps) will be written. Can use '~' for the home
                      directory and environment variables in {}'s. Thus,

                        outpath=~/images
                      and
                        outpath={$HOME}/images

                      are equivalent.

  rounds=N            Iterate N times. You may want to increase the
                      default number of iterations either to recover more
                      extended emission (e.g. when 'extended' is set), or
                      to go deeper (esp. when the 'faint' or 'deep'
                      options are used).

  iteration.[N]       Use as a condition to delay settings until the Nth
                      iteration. E.g.

                        iteration.[3] smooth halfbeam
                      or
                        > crush [...] -iteration.[3]smooth=halfbeam [...]

                      to specify half-beam smoothing starting from the
                      3rd iteration.

  iteration.[last]    Specify settings that should be interpreted only
                      at the beginning of the last iteration.

  final:key=value     A shorthand for the above :-).

  iteration.[last-N]  Activate settings N iterations before the last
                      one.

  iteration.[xx%]     Activate settings at a percentage of the total
                      number of iterations (as set by 'rounds'). E.g.

                        iteration.[50%] forget clip

                      can be used to disable the S/N clipping of the
                      source map halfway through the reduction.
                      Because of the flexible syntax, the same iteration
                      can be referred to in different ways. Consider
                      a reduction with 10 rounds. Then,

                        iteration.[5] smooth 5.0
                        iteration.[50%] smooth 10.0
                        iteration.[last-5] smooth beam

                      can all be used to define what happens in the 5th
                      iteration. CRUSH will parse these conditionals in
                      the above order: first the explicit iteration
                      settings, then those relative to the reduction
                      length, and finally the settings relative to the
                      end of the reduction. Thus, in the above example
                      the beam smoothing will always override the other
                      two settings.

  idle=N              Instruct crush NOT to use N of the CPUs of the
                      machine. By default crush will try to use all
                      processors in your machine for maximum performance.
                      This option lets you modify that behavior according
                      to need. Note that at least 1 CPU will always be
                      used by crush, independent of this setting.
                      The number of actual parallel threads will be the
                      smaller of the allowed number of CPUs and the
                      number of scans processed.
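
Putting a few of these together, a small (illustrative) configuration
script might read:

  rounds 15
  iteration.[50%] forget clip
  final:smooth beam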

3.3. Source (Map) Configuration

  1. altaz Shorthand for 'system=horizontal' to reduce in Alt/Az.
  2. It is also aliased to 'horizontal'.
  3. grid=X set the map pixelization to X arcsec. Pixelization
  4. smaller than 2/5 beam is recommended. The default is
  5. \~1/5 beam pixelization.
  6. name= Specify the output image file name, relative to the
  7. directory specified by 'outpath'. When not given
  8. minicrush will chose a file name based on the source
  9. name and scan number(s), which is either
  10. <sourcename>.<scanno>.fits
  11. or
  12. <sourcename>.<firstscan>-<lastscan>.fits
  13. For mapping. Other source model types (e.g. skydips
  14. or pixel maps) may have different default naming
  15. conventions.
  16. pixelmap Reduce pixel map data. Instead of making a single map
  17. from all pixels, separate maps are created for each
  18. pixel (Note, this can chew up some memory if you have
  19. a lot of pixels). At the end of the reduction CRUSH
  20. determines the actual pixel offsets in the focal plane.
  21. This option is equivalent to 'source.type=pixelmap'.
  22. pixelmap.writemaps Pixel maps (above) normally only produce the
  23. pixel position information. Use this option
  24. if you want CRUSH to write individual pixel maps as
  25. well, so you can peek at these yourself.
  26. projection= Choose a map projection to use. The following
  27. projections are supported:
  28. SFL -- Sanson-Flamsteed
  29. SIN -- Slant Orthographic
  30. TAN -- Gnomonic
  31. ZEA -- Zenithal Equal Area
  32. SFL -- Sanson-Flamsteed
  33. MER -- Mercator
  34. CAR -- Plate-Carree
  35. AIT -- Hammer-Aitoff
  36. GLS -- Global Sinusoidal
  37. STG -- Stereographic
  38. ARC -- Zenithal Equidistant
  39. skydip Reduce skydip data, instead of trying to make an
  40. impossibly large map out of it :-). This option is
  41. equivalent to specifying 'source.type=skydip'.
  42. smooth=X Smooth the map by X arcsec FWHM beam. Smoothing
  43. helps improve visual appearance, but is also useful
  44. during reduction to create more redundancy in the data
  45. in the intermediate reduction steps. Also, smoothing
  46. by the beam is optimal for point source extraction from
  47. deep fields. Therefore, beam smoothing is the default
  48. with the 'deep' option (see 'deep.cfg').
  49. Typically you want to use some smoothing during
  50. reduction, and you may want to turn it off in the
  51. final map. Thus, you may have something like:
  52. smooth=9.0 # 9" smoothing at first
  53. iteration.[2]smooth=12.0 # smooth more later
  54. iteration.[last]forget=smooth # no smoothing at last
  55. Other than specifying explicit values, you can use
  56. the predefined values: 'minimal', 'halfbeam', '2/3beam'
  57. or 'beam'. See more on smoothing in the advanced
  58. configuration section.
  59. source.filter Filter extended structures. By default the filter will
  60. skip over map pixels that are above the 'blanking' S/N
  61. level (>6 by default). Thus any structure above this
  62. significance level will remain unfiltered.
  63. Filtering is useful to get deeper in the map when
  64. retaining the very faint extended structures is not
  65. an issue. Thus, filtering above 5 times the source size
  66. (see 'sourcesize') is the default when the filter is used.
  67. See the advanced configuration section for further
  68. details on fine tuning the large-scale structure
  69. filter.
  70. source.fixedgains Use the fixed source gains
  71. (e.g. from an RCP file -- see the 'rcp' key).
  72. Normally, crush calculates source gains based on the
  73. correlated noise response and the specified point
  74. source couplings (e.g. as derived from the two gain
  75. columns of RCP files.)
  76. system=<type> Select the coordinate system for mapping. The default
  77. is 'equatorial'. Other possibilities are 'horizontal',
  78. 'ecliptic', 'galactic' or 'supergalactic'. Each of
  79. these values is additionally aliased to simple keys.
  80. Thus, you may use:
  81. > crush -galactic [...]
  82. as a shorthand for '-system=galactic'.
  83. unit=xxx Set the output to units xxx. You can use either the
  84. instrumental units (e.g. 'V/beam' or 'counts/beam') or
  85. the more typical 'Jy/beam' (default), as well as their
  86. common multiples (e.g. 'uJy/beam', or 'nV/beam').
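
As a sketch, several of these map options may be combined on a single
command line (the values below are illustrative only):

  1. > crush [...] -grid=1.5 -projection=TAN -galactic \
  2. -smooth=halfbeam -unit=mJy/beam -name=myMap.fits [...]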

3.4. Scan Configuration

Some options relate to the scans, helping to configure and handle them.
These options are specified before the list or range of scans to
which they apply, and remain valid for all scans read after, or until
an overriding option is placed. E.g.

  1. > ./crush -option1=x 10218-10532 12425 -option2=y 11411 \
  2. -option1=z 10496

will set option1 to ‘x’ for all scans but the last one, which will have
this option set to ‘z’. The last two scans will also have option2
set to ‘y’.

A detailed listing of all scan specific options can be found in the
‘GLOSSARY’. Here are a few of the most commonly used ones (in alphabetical
order).

  1. pointing=dx,dy Specify incremental pointing offsets x,y in the
  2. system of the telescope mount (i.e., azimuth and
  3. elevation for horizontal mounts). (Note, this option
  4. works like pcorr in APECS.)
  5. datapath=<dir> Start looking for raw data in directory <dir>. Some
  6. instruments may also interpret it as a root directory
  7. in which data may reside in some specific hierarchy. E.g.
  8. in <dir>/<project> for APEX bolometers. Thus, if an
  9. APEX instrument defines:
  10. datapath /homes/data
  11. project T-79.F-0002-2007
  12. then crush will try to find data first in
  13. '/homes/data', then in '/homes/data/T-79.F-0002-2007'
  14. This provides a convenient way for accessing
  15. hierarchically stored data. See the instrument-
  16. specific usage of 'datapath' in the GLOSSARY.
  17. jackknife Jackkniving is a useful technique to produce accurate
  18. noise maps from large datasets. When the option is
  19. used, the scan signals are randomly inverted, such that
  20. the source signals in large datasets will tend to cancel
  21. out, leaving one with pure noise maps.
  22. project=<id> Some instruments (e.g. APEX bolometers) may require
  23. a project ID to be set in order to locate scans by
  24. serial number. Use capitalized form when defining
  25. APEX projects. E.g.,
  26. project T-79.F-0002-2007
  27. read <filename> Read the scan data from <filename>, which can be
  28. either a fully specified path, or relative to
  29. 'datapath'. (On the command line it is sufficient
  30. to list the filename without a preceding dash.)
  31. read scanNo Read scan number scanNo (scripting only).
  32. In command-line mode, omit '-read='.
  33. read from-to Read the range of scan numbers between from and to.
  34. (inclusive). On the command line, you can omit
  35. '-read=' and simply list the scan range.
  36. read X Y [...] You can combine scan numbers and different ranges
  37. in a space-separated list...
  38. read... As mentioned, the 'read' keys only apply to scripts.
  39. Thus,
  40. read 10755-10762 # in script
  41. and
  42. > ./crush [...] 10755-10762 # command line
  43. are equivalent.
  44. scale=X Scale the fluxes by X. With this option you can apply
  45. calibration corrections to your data.
  46. scale=<filename> Alternatively, scan specific scaling can be defined
  47. by an appropriate calibration file, which among
  48. other things, contains the ISO time-stamp and
  49. the corresponding calibration values for each scan.
  50. The filename may contain references to environment
  51. variables enclosed in {} brackets. E.g.:
  52. scale={$HOME}/laboca/scaling.dat
  53. scramble Another technique for generating noise maps, which
  54. can be used also for small datasets, for which
  55. jackknifing cannot be used. This approach scrambles
  56. the pixel positions, such that the source signals
  57. will be smeared out in the maps.
  58. tau=X Specify a zenith tau value to use. When not used
  59. crush will try to interpolate from
  60. <instrument>/<instrument>-taus.master.dat if possible
  61. or use 0.0 as default.
  62. tau=<filename> Alternatively, tau can also specify a file-name with
  63. lookup information (usually containing tau
  64. values from the radiometer or from the skydips).
  65. Tau values will be interpolated for each scan,
  66. as long as the scan falls inside the interpolator's
  67. range. Otherwise, tau of 0.0 will be used. The
  68. filename may contain references to environment
  69. variables enclosed in {} brackets. E.g.:
  70. tau={$HOME}/laboca/tau.dat
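
Putting some of these together, a reduction of a range of APEX scans might
look like the following sketch (the paths and project ID are illustrative):

  1. > crush [...] -datapath=/homes/data -project=T-79.F-0002-2007 \
  2. -tau={$HOME}/laboca/tau.dat 10755-10762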

3.5. Instrument Configuration

There are various instrument configuration files. These reside in the
corresponding instrument subdirectories inside the main crush directory.
Some types of files are commonly used among many or all instruments, while
others may be strongly instrument specific.

In most cases the instrument configurations should be set correctly, and
you probably can leave these settings alone. However, here you will find
some of the most common instrument configuration options that you may, at
times, want to adjust to your preference.

A more complete list of the available instrument-specific configuration
options can be found in the GLOSSARY.

Generic Instrument Parameters

  1. pixeldata=<filename> Specifies a pixel data file, providing initial
  2. gains, weights and flags for the detectors,
  3. and possibly other information as well, depending on the
  4. specific instrument. Such files can be produced via the
  5. 'write.pixeldata' option (in addition to which you
  6. may want to specify 'forget=pixeldata' so that flags are
  7. determined without prior bias).
  8. rcp Specify pixel positions (and optionally point-source
  9. and sky-noise gains). These are standard IRAM
  10. or APEX RCP files containing the information in ASCII
  11. columns. RCP files can be produced by the 'pixelmap'
  12. option, from scans that move a bright point source
  13. over all pixels.
  14. rcp.gains Use gains from the RCP files. Otherwise gains may
  15. come from the 'pixeldata' file, or assume default
  16. values, such as 'uniform'.
  17. rcp.center=x,y Define the center RCP position at x,y in arcseconds.
  18. Centering takes place immediately after the parsing
  19. of RCP data.
  20. rcp.rotate=X Rotate the RCP positions by X degrees (counter
  21. clockwise). Rotations take place after centering (if
  22. specified).
  23. rcp.zoom=X Zoom (rescale) the RCP position data by the scaling
  24. factor X. Rescaling takes place after the centering
  25. (if defined).
  26. stability=X Specify the instrument 1/f stability time-scale in
  27. seconds. This is used for determining optimal 1/f
  28. filtering of the time-streams in the reduction.
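
For example, the following configuration lines (a sketch; the file name and
values are hypothetical) load pixel positions from an RCP file, then
recenter and rotate the parsed positions:

  1. rcp {$HOME}/myinstrument/positions.rcp
  2. rcp.center 1.2,-0.8
  3. rcp.rotate 90.0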

3.6. Advanced startup environment and Java configuration

We have seen before how to configure the startup environment and Java by
placing configuration snippets inside the directory ~/.crush2/startup/.
However, what if you want certain settings to apply to just a specific
program within the crush suite (e.g. to crush or show) only, but not
to all of them? That too is possible, by (creating and) editing a file
under the corresponding ~/.crush2/startup/<progname> directory. E.g.:

  1. ~/.crush2/startup/crush/myconfig.conf

will source the bash snippet contained in myconfig.conf for reductions
(crush executable) only. As with other startup scripts, the name of the
file itself is irrelevant, and all files will be parsed inside the
program’s startup directory (albeit in unspecified order!).

Inside this file you can set up environment variables, and add shell
(bash) scripts. E.g., the crush startup configuration file may contain
the lines:

  1. CRUSH_NO_UPDATE_CHECK="1"
  2. EXTRAOPTS="$EXTRAOPTS -Djava.awt.headless=true"

(The first disables update checking, the second adds headless mode to the
list of extra Java options already defined).


4. Correlated Signals

The removal of correlated signals, atmospheric or instrumental, is really the
heart of CRUSH. In its most generic form the decorrelation is a two-step
process: the first step finds and removes the correlated signals, assuming
some initial detector gains, after which the gains are re-estimated in the
second step based on the individual detector responses to the correlated
signals. (For details, see the CRUSH paper: Kovacs 2008, Proc. SPIE, 7020, 45.)
Flagging of non-responsive pixels, based on outlying gains (i.e. responses to
the correlated signals) may be part of the second step of the decorrelation.

4.1. Modes and Modalities

For each instrument, crush defines a set of modalities (i.e. a set of
correlated modes) on which decorrelation may be performed. Each mode in a
modality affects a group of pixels. Thus, the collection of pixel groups
with related modes constitutes a division. For example, SHARC-2 has 12
detector rows (each with 32 pixels). The dividing of pixels into 12 groups,
each representing a detector row, is a pixel division. Pixels in a given
row respond to the same correlated mode, and so there are 12 row-related
modes, which are bunched together in a modality.

Some of the modalities (and pixel divisions) are common to all (or most)
instruments. These are:

<modality name> Description
all All channels, regardless of their state
live All channels that aren’t dead.
detectors All detectors (e.g. excluding resistors etc.)
obs-channels All observing detectors (e.g. that see sky).
gradients Focal-plane gradients across the ‘observing’ detectors.
blinds Blind detectors (if any exist)
telescope-x Telescope position in the x direction (e.g. AZ)
telescope-y Telescope position in the y direction (e.g. EL)
accel-<dir> Acceleration in some direction (see below).
chopper-<dir> Chopper motion in some direction (see below).

Above, the motion-related modalities may have the following directionalities:

<dir> Description
x x direction.
y y direction.
x^2 square of the x coordinate.
y^2 square of the y coordinate.
|x| magnitude of the x coordinate.
|y| magnitude of the y coordinate.
mag vector magnitude.
norm square of vector magnitude.

4.2. Removing Correlated Signals

The decorrelation step is invoked by:

  1. correlated.<modality-name>

For example, a decorrelation of the signals induced by the chopper
displacement in x (e.g. SHARC-2) is invoked by:

  1. correlated.chopper-x

(Of course, the same keyword must also appear in the pipeline ‘ordering’,
for the step to actually take place during the reduction process —
otherwise crush will not know when to decorrelate the chopper signals.)

You can fine-tune how CRUSH deals with each correlated mode. For example,
the following lines:

  1. correlated.obs-channels.resolution 1.0
  2. correlated.obs-channels.gainrange 0.3--3.0

specify that the atmospheric signals, which appear in all observing
channels, should be decorrelated only once every second (vs. the default
of every available frame), and that any channel whose response falls
outside 0.3--3.0 times the ‘average’ response of the array should be
flagged and ignored in the rest of the reduction.

4.3. User-defined Modalities

In addition to the most common modalities listed above, each instrument
defines its own specific ones (e.g. rows for SHARC-2 from the example above).
See the README files in the instrument sub-directories of crush for a full
list of predefined instrument specific modalities. You can also define your
own pixel groups, divisions and modalities using the group and division
keywords. Here is an example:

  1. group.my-group-1 10,13,20-25
  2. group.my-group-2 40-56,78,80-82
  3. division.my-division my-group-1 my-group-2

The first two lines define two pixel groups (my-group-1 and my-group-2)
from pixel indexes and ranges, while the last line creates a division
(my-division) from these pixel groupings. CRUSH will also create a
correlated modality with the same name as the division. So, given the
definitions above, you can decorrelate on my-division with:

  1. correlated.my-division
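
Presumably, the fine-tuning sub-options shown in Section 4.2 apply to such
user-defined modalities the same way. E.g. (with illustrative values):

  1. correlated.my-division.resolution 1.0
  2. correlated.my-division.gainrange 0.3--3.0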

5. Custom Logging Support

CRUSH (as of version 2.03 really) provides a powerful scan/reduction logging
capability via the log and obslog keys.

log writes the log entries for each scan after the reduction, whereas
obslog does not reduce data at all, only logs the scans immediately after
reading them. While both logging functions work identically, some of the
values are generated during reduction, and therefore may not be available to
obslog.

5.1. Log Files and Versioning

You can specify the log files to use with the log.file and obslog.file
keys (the default is to use <instrument>.log and <instrument>.obs.log in
the outpath directory). Equivalently, you can also set the filename
directly as an optional argument to log and obslog:

  1. > crush [...] -log=myLog.log [...]

You can specify the quantities, and the formatting in which they will appear,
using the log.format and obslog.format keys. Below you will find more
information on how to set the number formatting of quantities and a list
of values available for the logging.

A log file always has a fixed format: the one that was used when it was
created. Therefore, a conflict may arise if the same log file is specified
use with a different format. The policy for resolving such conflicts can be
set via the log.conflict and obslog.conflict keys, which can have one
of the following values:

  1. overwrite Overwrites the previous log file, with a new one in the
  2. newly specified format.
  3. version Tries to find a sub-version of the same log file (with .1,
  4. .2, .3 ... extension added) in the new format, or create
  5. the next available sub-version.

The default behaviour is to assume versioning, in order to preserve
information in case of conflicts.
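
For example, to log to a specific file and overwrite it on a format
conflict, one might use:

  1. > crush [...] -log=myLog.log -log.conflict=overwrite [...]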

5.2. Decimal Formatting of Values

Many of the quantities you can log are floating point values, and you have
the possibility of controlling how these will appear in your log files.
Simply put one of the formatting directives in brackets after the value
to specify its format.

E.g. the keys RA or RAh will write the right ascension coordinate either
as radians or as hours, with the default floating point formats. However,
RA(h:2) will write the value in human-readable hh:mm:ss.ss format,
whereas RAh(f3) will express it as hh.hhh.

You can choose from the following formats to express various quantities.
Be careful, because not all formats are appropriate to all types of data.
(For example, you should not try to format angles expressed in degrees with
the DMS formatting capabilities of the angle formats. Use these only
with angles expressed in radians!)

  1. d0...d9 Integer format with 0...9 digits. E.g. 'd3' will write
  2. Pi (3.1415...) as 003.
  3. e0...e9 Exponential format with 0...9 decimals. E.g. e4 will
  4. write Pi as 3.1416E0.
  5. f0...f9 Floating point format with 0...9 decimals. E.g. f3 will
  6. write Pi as 3.142.
  7. s0...s9 Human readable format with 0...9 significant figures
  8. (floating point or exponential format,
  9. whichever is more compact).
  10. a:0...a:3
  11. as0...as3
  12. al0...al3 Angle format with 0...3 decimals on the seconds. E.g.
  13. 'a:1' produces angles in ddd:mm:ss.s format. Use only
  14. with values expressed as radians (not in degrees!).
  15. As such, 'a:1' will format Pi as '180:00:00.0'. The
  16. difference between the 'a:', 'as', and 'al' formats is
  17. the separators used between degrees, minutes and seconds
  18. (colons, symbols, and letters respectively).
  19. h:0...h:3
  20. hs0...hs3
  21. hl0...hl3 Hour-angle format (e.g. for RA coordinate) with 0...3
  22. decimals on the seconds. E.g. 'h:2' formats angles in
  23. 'hh:mm:ss.ss' format. Use only with values expressed
  24. as radians (not in degrees!). As such 'h:2' will format
  25. Pi as '12:00:00.00'. The difference between the 'h:',
  26. 'hs', and 'hl' formats is the separators used between
  27. hours, minutes and seconds (colons, symbols, and
  28. letters respectively).
  29. E.g. Pi will be:
  30. h:1 12:00:00.0
  31. hs1 12h00'00.0"
  32. hl1 12h00m00.0s
  33. t:0...t:3
  34. ts0...ts3
  35. tl0...tl3 Time format with 0...3 decimals on the seconds. E.g.
  36. 't:1' formats time in 'hh:mm:ss.s' format. Use only
  37. on time values expressed in seconds! The difference
  38. between the 't:', 'ts', and 'tl' formats is the
  39. separators used between hours, minutes and seconds
  40. (colons, symbols, and letters respectively).
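
As a sketch, a custom log format combining several of these directives
might look like the following (assuming that whitespace in the format
string separates the logged columns):

  1. log.format id object RA(h:2) DEC(a:1) UT(t:1) zenithtau(f3)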

5.3. Loggable Quantities

Currently, CRUSH offers the following quantities for logging. Directives
starting with ‘?’ will log the values of configuration keys. Other
quantities reflect the various internals of the scan or instrument state.
More quantities will be added to this list in the future, especially
values that are specific to certain instruments only. Keep an eye out for
changes/additions :-).

  1. ?<keyword> The value of the configuration <keyword>. If the
  2. configuration option is not defined '---' is written.
  3. If the keyword is set without a value, then '<true>'
  4. is written.
  5. AZ Azimuth coordinate (in radians). E.g. 'AZ(a:1)'
  6. produces ddd:mm:ss.s formatted output. See also 'AZd'
  7. and 'EL'.
  8. AZd Azimuth coordinate in degrees. E.g. 'AZd(f2)'
  9. produces output in ddd.dd format. See also 'AZ' and
  10. 'ELd'
  11. channels Number of channels processed in the reduction. See
  12. also 'okchannels' and 'maxchannels'.
  13. chopeff Chopper efficiency.
  14. chopfreq Chopper frequency (in Hz).
  15. chopthrow Chopper throw (2x amplitude).
  16. creator Software that created the data file (as stored by
  17. the FITS CREATOR keyword).
  18. date Date (and time) of the observation. E.g.
  19. 'date(yyyy-MM-dd)'. The format follows the rules for
  20. Java's SimpleDateFormat class.
  21. DEC Declination coordinate (in radians). E.g. 'DEC(a:0)'
  22. produces the declination in ddd:mm:ss format. See also
  23. 'RA' and 'DECd'.
  24. DECd Declination coordinate (in degrees). E.g. 'DECd(f1)'
  25. produces output in ddd.d format. See also 'RAh' and
  26. 'DEC'.
  27. descriptor A descriptor string for the scan (e.g. the scan number
  28. or file name used to invoke the scan).
  29. dir Scanning direction. E.g. 'HO', 'EQ', 'EC', 'GL', 'SG'.
  30. EL Elevation coordinate (in radians).
  31. ELd Elevation coordinate (in degrees).
  32. epoch Full coordinate epoch, e.g. (J2000.0).
  33. epochY E.g. epochY(f1) -> 2000.0
  34. frames Number of unflagged frames in the scan.
  35. FWHM Mean beam FWHM of the instrument (in 'sizeunit').
  36. gain Instrument gain.
  37. generation Source model generation.
  38. hipass Highpass filtering timescale (in seconds)
  39. humidity Ambient humidity (if recorded).
  40. id Scan identifier (e.g. scan number)
  41. integrations Number of integrations (subscans) contained in the
  42. scan.
  43. LST Local Sidereal Time (in seconds). E.g. use 'LST(t:1)'
  44. to format it as 'hh:mm:ss.s'.
  45. LSTh LST in hours.
  46. maxchannels Maximum number of channels the instrument can have
  47. (effectively the number of channels stored in the
  48. data file).
  49. maxFWHM Largest beam of the instrument (in sizeunit).
  50. minFWHM Smallest beam of the instrument (in sizeunit).
  51. MJD Modified Julian Date of the scan.
  52. mount The focus in which the instrument is mounted.
  53. NEFD Average Noise Equivalent Flux Density (Jy sqrt[s]) as a
  54. measure of the array sensitivity.
  55. object Name of observed object.
  56. observer Name(s) of the observer(s).
  57. obshours Effective on-source time (in hours).
  58. obsmins Effective on-source time (in minutes).
  59. obstime Effective on-source time (in seconds).
  60. okchannels Number of good (unflagged) channels.
  61. PA Mean parallactic angle (in radians).
  62. PAd Mean parallactic angle in degrees.
  63. pnt.angle Source elongation angle on map (degrees).
  64. pnt.asymX Asymmetry in telescope X (%).
  65. pnt.asymY Asymmetry in telescope Y (%).
  66. pnt.AZ Azimuth pointing (arcsec).
  67. pnt.EL Elevation pointing (arcsec).
  68. pnt.DEC Declination pointing (arcsec).
  69. pnt.dasymX Asymmetry uncertainty in telescope X (%).
  70. pnt.dasymY Asymmetry uncertainty in telescope Y (%).
  71. pnt.dangle Source elongation angle error (deg).
  72. pnt.dAZ Azimuth pointing residual (if available).
  73. pnt.dDEC Pointing residual in the declination dir (if available).
  74. pnt.dEL Elevation pointing residual (if available).
  75. pnt.delong Source elongation error (%).
  76. pnt.delongX Source elongation error in telescope X direction (%).
  77. pnt.dNasX Pointing residual in the Nasmyth X dir (if available).
  78. pnt.dNasY Pointing residual in the Nasmyth Y dir (if available).
  79. pnt.dRA Pointing residual in the RA direction (if available).
  80. pnt.dX Pointing residual in native X direction.
  81. pnt.dY Pointing residual in native Y direction.
  82. pnt.elong Source elongation (%).
  83. pnt.elongX Source elongation in telescope X direction (%).
  84. pnt.RA RA pointing (arcsec).
  85. pnt.X Aggregated X pointing correction (including applied
  86. corrections and sometimes the pointing model too).
  87. pnt.Y Aggregated Y pointing correction (including applied
  88. corrections and sometimes the pointing model too).
  89. pressure Ambient pressure (if recorded) in hPa.
  90. project Project name (if defined)
  91. ptfilter The point-source flux filtering throughput of the
  92. reduction.
  93. RA Right ascension coordinate (in radians). E.g. use
  94. 'RA(h:2)' to format as 'hh:mm:ss.ss'.
  95. RAd Right ascension coordinate (in degrees).
  96. RAh Right ascension coordinate (in hours).
  97. rate Sampling rate of the data (Hz). See also 'sampling'.
  98. resolution Instrument resolution (i.e. FWHM of the main beam),
  99. in 'sizeunit'.
  100. rmsspeed RMS fluctuation of the scanning speed (arcsec/s).
  101. sampling Sampling interval (seconds). The inverse of 'rate'.
  102. scale Scaling factor applied to the scan.
  103. scanspeed Average scanning speed (arcsec/s).
  104. serial Serial number of the scan.
  105. sizeunit Size unit for measuring resolutions. E.g. 'arcsec'.
  106. src.a Major axis (arcsec) of the source ellipse.
  107. src.angle Orientation of the source ellipse (deg).
  108. src.b Minor axis (arcsec) of the source ellipse.
  109. src.da Uncertainty of the major axis (arcsec).
  110. src.dangle Uncertainty of the orientation (deg).
  111. src.db Uncertainty of the minor axis (arcsec).
  112. src.dFWHM Uncertainty of the source FWHM (arcsec).
  113. src.dint Uncertainty of the integrated source flux (Jy).
  114. src.dpeak Uncertainty of the peak source flux (Jy/beam).
  115. src.FWHM Source FWHM (arcsec).
  116. src.int Integrated source flux (Jy) inside an adaptive
  117. aperture.
  118. src.peak Peak source flux (Jy/beam).
  119. stat1f A measure of the 1/f noise averaged over the array.
  120. The configuration options '1overf.freq' and
  121. '1overf.ref' define the frequencies for the 1/f
  122. measurement and white-noise reference respectively.
  123. Tamb Ambient temperature (if recorded) in degrees C.
  124. tau The in-band, line-of-sight opacity value.
  125. tau.<id> The zenith tau value for <id>. E.g. 'tau.225GHz' or
  126. 'tau.PWV' if these are defined by appropriate scaling
  127. relations.
  128. UT UT in seconds. E.g. use 'UT(t:1)' to format it as
  129. 'hh:mm:ss.s'.
  130. UTh UT in hours.
  131. weight The relative weight of the scan, based on the actual
  132. noise of the map it generates.
  133. winddir Wind direction (if recorded) in degrees.
  134. windpk Peak wind speed (if recorded) in m/s.
  135. windspeed Wind speed (if recorded) in m/s.
  136. zenithtau In-band opacity at zenith.

Source model specific log entries

  1. map.beams Number of (smoothed) beams in the map.
  2. map.contentType Type of data stored in the map.
  3. map.creator Creator's name or description.
  4. map.depth Weighted average depth of the map in map units.
  5. map.max Maximum value in map units.
  6. map.min Minimum value in map units.
  7. map.name Name of map data (e.g. object's name)
  8. map.points Number of pixels in the map.
  9. map.rms Typical RMS of the map in map units.
  10. map.size Size of the map in pixels e.g. '121x432'.
  11. map.sizeX Map size in the 'x' direction (pixels).
  12. map.sizeY Map size in the 'y' direction (pixels).
  13. map.system Coordinate system id, e.g. 'HO', 'EQ', 'GL' etc.
  14. map.unit Name of map unit, e.g. 'Jy/beam'.
  15. phot.flux Photometric flux in Jy/beam (e.g. for LABOCA).
  16. phot.dflux Uncertainty in the photometric flux (Jy/beam).
  17. skydip.kelvin Data units that correspond to a 1K load on the
  18. detectors (for skydip data, if and when fitted).
  19. skydip.dkelvin Uncertainty in data units per kelvin conversion.
  20. skydip.tau Tau value derived (or assumed) from skydip data
  21. skydip.dtau Tau uncertainty from skydip data
  22. skydip.tsky Derived (or assumed) sky temperature in K.
  23. skydip.dtsky Sky temperature uncertainty (K).
  24. smooth Smoothing applied, in the default size unit of
  25. the instrument (e.g. in arcsecs).
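
Similarly, a post-reduction log of map statistics might be configured as
follows (a sketch, with the same whitespace assumption as above):

  1. log.format id object map.size map.rms(s3) map.unit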

Telescope specific log entries

APEX: see crush/Documentation/README.apex.

Instrument specific log entries

See the crush/Documentation/README.<instrument>, for each <instrument>.


6. Supported Cameras

Currently, CRUSH supports the following instruments (in alphabetical order):

  1. GISMO (2mm) Goddard-IRAM Superconducting 2-Millimeter Observer
  2. www.iram.es/IRAMES/mainWiki/GoddardIramSuperconductingTwo\
  3. Millimeter
  4. LABOCA (870um) Large APEX Bolometer Camera
  5. www.apex-telescope.org/bolometer/laboca
  6. PolKa (polarimetry) Polarimeter for LABOCA
  7. www.mpifr-bonn.mpg.de/staff/gsiringo/laboca/laboca\
  8. _polarimeter.html
  9. SABOCA (350um) Submillimeter APEX Bolometer Camera
  10. www.apex-telescope.org/bolometer/saboca
  11. SCUBA-2 (450um, 850um) Submillimetre Common User Bolometer Array 2
  12. www.roe.ac.uk/ukatc/projects/scubatwo
  13. SHARC-2 (350um, 450um, 850um) The second-generation Submillimeter
  14. High-Angular Resolution Camera
  15. Caltech, Pasadena, CA
  16. www.submm.caltech.edu/~sharc
  17. SOFIA/HAWC+ (53um, 62um, 89um, 155um, 216um)
  18. High-resolution Airborne Wide-angle Camera

The following instruments have legacy support only (supported by CRUSH 2.4x
and earlier releases):

  1. ASZCa (2mm) APEX SZ Camera, from Berkeley, CA
  2. bolo.berkeley.edu/apexsz/instrument.html
  3. MAKO (350um) KID technology demonstration camera for the CSO.
  4. MAKO-2 (350um, 850um) Second-generation KID technology demonstration
  5. camera for the CSO with dual-pol pixel response, and dual-band
  6. imaging capability.
  7. MUSTANG-2 (3mm) Large focal plane array for the 100m Green Bank Telescope.
  8. www.gb.nrao.edu/mustang/
  9. p-ArTeMiS (200um, 350um, 450um) 3-color camera for APEX (prototype)
  10. www.apex-telescope.org/instruments/pi/artemis
  11. SHARC (350um, 450um) Submillimeter High-Angular Resolution Camera
  12. Caltech, Pasadena, CA
  13. http://www.submm.caltech.edu/cso/sharc/cso_sharc.html

Further support for instruments is possible. If you are interested in using
CRUSH for your bolometer array, please contact Attila Kovacs
<attila[AT]sigmyne.com>.

In principle, it is possible to extend CRUSH support for other types of scanning
instruments, like heterodyne receiver arrays, or single-pixel receivers that
scan in frequency space, or for the reduction of line-surveys from double
side-band (DSB) receivers. Such new features may appear in future releases,
especially upon specific request and/or arrangement…


7. Future Developments (A Wish List…)

There are a number of plans for new features in CRUSH. Some may come in the
near future, others perhaps later, depending on the resources available for
implementation. Nonetheless, the following avenues of feature enrichment are
being considered.

7.1. Support for Heterodyne Receivers and Interferometry

CRUSH was born as a bolometer array reduction package. However, there is no
reason why many of its principles could not be applied to other types of
instruments (astronomical or otherwise), especially those that are affected
by correlated signals. Besides, CRUSH also provides powerful tools for other
analysis steps, like weighting, despiking, spectral filtering, and mapping.

Two obvious extensions would be heterodyne receivers (and receiver arrays)
and the use of CRUSH for interferometry (e.g. ALMA), since it is also well
suited to dealing with immense data volumes.

7.2. Interactive Reductions

At present CRUSH offers only a reduction pipeline. This is ideal for
crunching large volumes of data in a more-or-less automated fashion, and
for making reduction painless. It is also ideal for most users, who might
not wish to learn about the intricacies of each and every reduction step.
However, in some cases having more control may be beneficial.

Many astronomers are used to interactive reduction packages (e.g. GILDAS,
AIPS, BoA, etc.). CRUSH should eventually offer such a capability also. This
would allow step-by-step reductions, together with various plotting
capabilities to look at the data at every stage. Such a capability would be
very useful in the process of building pipelines for new instruments.

It should be relatively simple to provide this feature. The current
configuration language of CRUSH can be easily adapted for more interactive
use. Even more detailed control may be possible through a standard scripting
language like JavaScript or Rhino. The main job would be to supply the
essential plotting capabilities, but that too may come sooner than later…

7.3. Distributed and GPU Computing

The reduction paradigm of CRUSH is massively parallelizable. CRUSH can
already make good use of any number of processing cores within a computer.
It should be quite straightforward to extend the implementation over several
computing nodes and super-computers, allowing orders of magnitude increases
in data reduction speeds and data volumes.

Another way of improving speeds may come from the use of graphics (GPU)
computing. Today’s graphics chips provide computing power well in excess of
that offered by CPUs. This can be harnessed with programming tools
like CUDA or OpenCL. GPU computing is still in its infancy, and thus
its language specifications are likely very fluid. But technical
details aside, GPUs clearly offer a way of boosting the number-crunching
performance of CRUSH.

7.4. Graphical User-Interface (GUI)

The addition of a Graphical User Interface (GUI) to CRUSH would make
configurations more transparent and intuitive. It would allow users who do
not use CRUSH often to avoid learning the complexities of command-line
options and scripting support, and instead click their way through the
essential configuration settings.

GUIs may also aid the more expert users by providing a way to look at
details, such as monitoring quantities, signals or maps during the reduction.


8. Further information

CRUSH-2 is the next-generation data reduction package, inheriting its DNA from
the pioneering SHARC-2 specific version (crush-1.xx) and from the APEX specific
minicrush implementation. It is a much more capable package than either of its
predecessors. However, because the output FITS images are backward compatible
with the older versions (for the most part), parts of the original CRUSH
package can still be useful for manipulating images post reduction.

Mainly, the crush-1.xx distribution provides tools for displaying (show),
manipulating (imagetool), and coadding (coadd) or differencing
images (difftool), and producing histograms (histogram) from them.

The CRUSH package(s), and further information on the FITS images, are
available at:

http://www.sigmyne.com/crush

I hope you’ll find this helpful and may you end up with beautiful images!!!


Copyright (C)2019 — Attila Kovacs