Narrowband Editing Workflow for PixInsight

One of the hardest parts of learning PixInsight is knowing which processes to use and when. While there can never be a one-size-fits-all guide, this one outlines the workflow I typically use when processing my narrowband images, and you may find it a helpful reference as you edit your own.

This guide assumes you:

  1. Are familiar with basic PixInsight navigation (e.g., how to apply AutoSTF, the difference between linear and nonlinear images, etc.). This guide doesn't walk through the mechanics of each tool in detail, as I find that is best learned via video tutorials, PixInsight's documentation, or hands-on experimentation.
  2. Have already stacked your master lights using something like WBPP (WeightedBatchPreprocessing).
  3. Have the following plugins installed, or are comfortable using alternatives (add their repositories under Resources > Updates > Manage Repositories):
    • NarrowbandColourMapper: https://www.cosmicphotons.com/pi-scripts/nbcolourmapper/ (Alternative: PixelMath, but NarrowbandColourMapper is really convenient.)
    • BlurXTerminator: https://www.rc-astro.com/BlurXTerminator/PixInsight/ (I'm not impressed with the alternatives at the time of writing. You can skip this if you don't have it, or ask a friend to run it for you.)
    • StarXTerminator: https://www.rc-astro.com/StarXTerminator/PixInsight/ (Alternative: StarNet.)
    • GeneralizedHyperbolicStretch: https://www.ghsastro.co.uk/updates/ (Alternative: any other stretching method you're comfortable with.)
    • DeepSNR: https://pixinsight.deepsnrastro.com/ (Alternative: NoiseXTerminator.)
    • RC Astro's TensorFlow GPU Accelerator (Windows only): https://www.rc-astro.com/TensorFlow/PixInsight/GPU (Linux instructions are available separately; you can also go without this.)

 

Linear Workflow

Step 0: Open your stacked master narrowband images

Close the integration weight images and rejection maps that open alongside your master lights, then AutoSTF each channel.

Step 1: Rename your images

I like naming mine S, H, and O. 

Step 2: Use LinearFit

I like to LinearFit my channels so they're roughly the same brightness. This makes the following steps easier. To do this, set one channel as your reference in LinearFit, and apply the process to each of the other channels.
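To see what LinearFit is doing conceptually, here's a toy numpy sketch (illustrative only, not PixInsight's implementation): the target channel is assumed to differ from the reference by a gain and an offset, and a least-squares fit recovers the correction.

```python
import numpy as np

# Hypothetical stand-ins for two narrowband channels.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
target = 0.5 * ref + 0.1   # target differs by a gain and an offset

# Fit ref ≈ a*target + b, then apply the fit so the target
# matches the reference's brightness scale:
a, b = np.polyfit(target.ravel(), ref.ravel(), 1)
matched = a * target + b
# matched now sits on (roughly) the same brightness scale as ref
```

In this synthetic case the fit is exact; on real data the channels only match statistically, which is all the later combination steps need.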

Step 3: Use NarrowbandColourMapper

Use the NarrowbandColourMapper script to assign colors to your channels and combine them. Adam Block has a great video explaining how to use this script. I recommend turning BackgroundNeutralization on in the settings while you're tuning the colors, but turn it off before exporting the colorized image. Then view your image with an unlinked AutoSTF until you've run BackgroundNeutralization a couple of steps from now.

Step 4: Gradient removal with DynamicBackgroundExtraction

While you can use GradientCorrection, AutomaticBackgroundExtraction, or GraXpert, DynamicBackgroundExtraction is the superior background extraction method. It takes practice, and I recommend you look up a detailed guide or ask an experienced friend for help, but here's a high-level overview of how DBE works:

In DBE, you place sample points on regions of your image that you want to make a uniform brightness; typically you place these on the "background" parts of your image where there is just empty space. DBE averages the pixels within each sample point, ignoring pixels that are too dark or bright, as defined by the Tolerance parameter. It then takes those averaged samples and calculates a smooth, interpolated background model between them. Then, it subtracts (or divides) that background model from your original image, leaving you with a corrected image where all of the sample regions you selected are now the same brightness/color. 

Some notes:

  1. Set "Correction" to either "Subtract" or "Divide" based on what type of background you are correcting. Divide is for removing multiplicative gradients (e.g., vignetting from your lens or nonuniform flats). Subtract is for additive gradients (e.g., light pollution). If you're unsure which to pick, it doesn't matter too much; just roll with one and see if it looks good. 
  2. Enable "Normalize" and "Replace target image".
  3. Place your sample points on regions with no stars. Turn your "Tolerance" setting up such that the background is selected (shown as white in the preview window) and any stars are rejected (black). 
    1. For simple gradients, a handful of sample points around the edges/corners of your image should suffice. Just make sure you're not placing them on the structure of the nebula/galaxy. 
    2. If you're a pro, you can remove the stars prior to this step, generate a background model more easily from the starless image, undo the background extraction and star removal, then re-subtract (or divide) the star-containing image by the background model you generated from the starless image. The details are a bit advanced for this guide, but for those interested, the PixelMath is $T - bg + med(bg) for subtraction, or $T * med(bg) / bg for division.
  4. Don't tell anyone I told you this, but sometimes AutomaticBackgroundExtraction does an OK job. Just turn the function degree down to 1 for linear gradients or 2 for anything more complex. 
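The whole DBE pipeline (sample, model, correct) can be sketched in a few lines of numpy. Everything here is illustrative: the plane-model fit, sample coordinates, and variable names are assumptions, not PixInsight's actual code, but the median-preserving subtraction is the same $T - bg + med(bg) PixelMath from note 3 above.

```python
import numpy as np

# A synthetic frame: flat sky plus an additive linear gradient.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
image = 0.2 + 0.002 * xx + 0.001 * yy

# "Sample points": average small background patches near edges/corners.
sample_coords = [(4, 4), (4, 59), (59, 4), (59, 59), (32, 4), (32, 59)]
samples = [(x, y, image[y-2:y+3, x-2:x+3].mean()) for x, y in sample_coords]

# Fit a smooth model (here just a plane) through the samples...
A = np.array([[x, y, 1.0] for x, y, _ in samples])
v = np.array([val for _, _, val in samples])
cx, cy, c0 = np.linalg.lstsq(A, v, rcond=None)[0]
bg = cx * xx + cy * yy + c0

# ...then subtract it while preserving the median level:
corrected = image - bg + np.median(bg)
# The background is now uniform: np.ptp(corrected) is ~0.
```

Real gradients aren't planes, which is why DBE interpolates a flexible surface between many samples, but the correction step at the end is exactly this.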

Step 5: Background Neutralization

To fix a color cast, create a preview over a starless portion of the background whose color you'd like to turn a neutral grey (use the New Preview Mode button on the toolbar, or Alt+N, then drag over the region).

Then apply BackgroundNeutralization using that preview window as your reference. After this is complete, switch to a linked AutoSTF.
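Conceptually, BackgroundNeutralization scales each channel so the preview region has equal means in R, G, and B. Here's a simplified numpy stand-in (names and the pure-scaling approach are assumptions for illustration, not the process's exact algorithm):

```python
import numpy as np

# A synthetic RGB image with a deliberate color cast.
rng = np.random.default_rng(1)
rgb = rng.random((32, 32, 3)) * np.array([1.0, 0.8, 1.3])

preview = rgb[:8, :8, :]            # the "background" preview region
bg_means = preview.mean(axis=(0, 1))
target = bg_means.mean()            # the neutral grey level to aim for

# Scale each channel so the preview's background becomes neutral grey.
neutral = rgb * (target / bg_means)
```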

Step 6: Deconvolution

Run BlurXTerminator.

Step 7: Star Removal

Run StarXTerminator (or StarNet). If you have RGB stars you'd like to add in separately, you can discard the narrowband stars. To process the RGB stars, refer to the instructions for processing the RGB channels in my LRGB editing guide.

 

Nonlinear Workflow

Step 8: Stretching

I use GeneralizedHyperbolicStretch, but you can use whatever method you're familiar with. You don't need a fully dialed-in stretch just yet. A rough approximation of your final stretch will do, as you'll get finer adjustment using the CurvesTransformation process.
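To illustrate what any nonlinear stretch is doing, here's a minimal asinh stretch in numpy. This is a common simple alternative, not the GHS algorithm itself; the `stretch` factor is an arbitrary illustrative value.

```python
import numpy as np

def asinh_stretch(img, stretch=100.0):
    # Lifts faint signal strongly while compressing highlights.
    # Assumes img is normalized to [0, 1].
    return np.arcsinh(stretch * img) / np.arcsinh(stretch)

linear = np.linspace(0.0, 1.0, 5)
stretched = asinh_stretch(linear)
# Endpoints are preserved (0 -> 0, 1 -> 1); midtones are lifted.
```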

Step 9: Denoising

Use DeepSNR (or NoiseXTerminator)

Some Additional Tweaks

These next steps can be done in any order; do whatever looks good to you. Feel free to repeat steps, too. But here's the rough order of operations that I usually follow:

Step 10: Curves

CurvesTransformation is the most powerful tool in PixInsight. I use it for everything:

  • Want to increase the contrast? Curves.
  • Want to adjust your stretch beyond your initial stretch? Curves.
  • Want to tweak the hue? Curves.
  • Want to eliminate a background color cast? Curves.
  • Want to adjust your background brightness? Curves.
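Under the hood, a curves adjustment is just a monotonic remapping of pixel values through control points. A minimal numpy sketch (the control-point values are made up; PixInsight uses smooth spline interpolation rather than the linear `np.interp` shown here):

```python
import numpy as np

# Control points defining a gentle S-curve for contrast.
x_pts = np.array([0.0, 0.25, 0.75, 1.0])
y_pts = np.array([0.0, 0.18, 0.82, 1.0])

img = np.linspace(0.0, 1.0, 11)          # stand-in for image pixel values
adjusted = np.interp(img, x_pts, y_pts)  # apply the curve

# Shadows are pushed down and highlights pushed up,
# which increases midtone contrast.
```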

Step 11: Saturation

Curves is excellent for adjusting saturation globally, but if you want to target a specific hue, the ColorSaturation script is the way to go. 

Step 12: LHE

LocalHistogramEqualization is very easy to overdo. If you use it at all, you should have your "Amount" slider turned way down. 

Add the stars back

Step 13: Stretch the Stars

Use your preferred stretching method to stretch your stars. 

Step 14: Saturate Stars

The stars usually need a hint of saturation. I use the CurvesTransformation process. 

Step 15: PixelMath to Add Stars Back

Use the following PixelMath expression to add the stars back to your background:

~(~stars*~starless)

Step 16: Crop & Rotate

Technically, you could do this anytime, but I like saving it for last. Use DynamicCrop and FastRotate to get your desired framing. 
