wgrib2: -ave, -fcst_ave

Introduction

The -ave and -fcst_ave options are very similar; both make temporal averages. The difference is the time used to group records: -fcst_ave uses the verification time, while -ave uses the reference time.

You would use -fcst_ave to temporally average a single forecast run. For example, given a 3-week forecast with output every 6 hours, you could use -fcst_ave to compute the forecast average for the second week.
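As a sketch of that week-2 example, the inventory can be filtered to forecast hours 168-330 before averaging. The file names and the "N hour fcst" inventory label below are assumptions, and the file is assumed to hold a single variable/level (otherwise the inventory must first be sorted, as described later on this page).

```shell
# Sketch only: average week 2 (forecast hours 168-330) of a 6-hourly run.
# "fcst.grb" and "week2.grb" are assumed file names; the egrep pattern
# assumes wgrib2's default "N hour fcst" inventory labels.
hours=$(seq 168 6 330 | paste -sd'|' -)   # -> "168|174|...|330"
if command -v wgrib2 >/dev/null; then     # skip if wgrib2 is not installed
  wgrib2 fcst.grb | egrep ":(${hours}) hour fcst:" | \
    wgrib2 -i fcst.grb -fcst_ave 6hr week2.grb
fi
```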

You would use -ave to temporally average the results of different analyses. For example, given analyses every 6 hours, you could use -ave to compute the monthly mean analysis.

The input grib file has to be processed in a special order. Don't worry, a grib file can be ordered very easily with the sort command. wgrib2 reads the data sequentially, and whenever it encounters a new variable/level/chemical-type, it starts a new averaging process. The length of the average depends on how many records it finds to average. For example, to make a daily average, a file has to be in the following order.

U500 2000-01-02 00Z             start ave
U500 2000-01-02 06Z
U500 2000-01-02 12Z
U500 2000-01-02 18Z             end ave
V500 2000-01-02 00Z             start ave
V500 2000-01-02 06Z
V500 2000-01-02 12Z
V500 2000-01-02 18Z             end ave
Z500 2000-01-02 00Z             start ave
Z500 2000-01-02 06Z
Z500 2000-01-02 12Z
Z500 2000-01-02 18Z             end ave
To make a daily average of the above file, you need to specify the output file and the time interval between samples. The time units are the same as used by GrADS (hr, dy, mo, yr).
$ wgrib2 input.grb -ave 6hr out.grb
If the file is not sorted, you can sort the inventory with the Unix sort command:
$ wgrib2 input.grb | sort -t: -k4,4 -k5,5 -k6,6 -k3,3 | \
   wgrib2 -i input.grb -set_grib_type c3 -ave 6hr output.grb
To make daily means from a 4x-daily monthly file that contains more than one variable/level:
$ wgrib2 input.grb |  sed 's/\(:d=........\)/\1:/' | \
  sort -t: -k3,3 -k5,5 -k6,6 -k7,7 -k4,4 | \
  wgrib2 input.grb -i -set_grib_type c3 -ave 6hr daily.ave.grb

Using -fcst_ave is like using -ave except that you use the verification time instead of the reference time. To make an inventory that uses the verification time instead of the reference time, you type:

$ wgrib2 input.grb -vt -var -lev -misc 
1:0:vt=2011040101:PRATE:surface:
2:592224:vt=2011040102:PRATE:surface:
3:1233694:vt=2011040103:PRATE:surface:
4:1909322:vt=2011040104:PRATE:surface:
5:2612620:vt=2011040105:PRATE:surface:
The sed command used for the sort is altered very slightly: match (:vt=) instead of (:d=).
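Putting it together, a daily-mean pipeline keyed on verification time might look like the following sketch (file names are assumptions; the sed pattern splits the 10-digit vt= field after the YYYYMMDD date, just as the :d= version does):

```shell
# Sketch: daily means grouped by verification time.  The sed command
# inserts a ":" after the YYYYMMDD part of the vt= field so that sort
# groups records by day; file names are assumptions.
if command -v wgrib2 >/dev/null; then   # skip if wgrib2 is not installed
  wgrib2 input.grb -vt -var -lev -misc | sed 's/\(:vt=........\)/\1:/' | \
    sort -t: -k3,3 -k5,5 -k6,6 -k7,7 -k4,4 | \
    wgrib2 input.grb -i -set_grib_type c3 -fcst_ave 6hr daily.ave.grb
fi
```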

Fast Averaging

Suppose we have a month of analyses at 3 hour intervals and want to make a monthly mean. Using the above approach, the steps would be

1.  cat narr.201411????.grb2 >tmp.grb2
2.  wgrib2 tmp.grb2 |  \
3.    sort -t: -k4,4 -k5,5 -k6,6 -k3,3 | \
4.    wgrib2 tmp.grb2 -i -set_grib_type c3 -ave 3hr narr.201411

The first line creates a file with all the data.
The second line makes an inventory.
The third line sorts the inventory in the order for -ave to process.
The fourth line makes the average by processing data in the order
  determined by the inventory.

The above approach processes one average at a time and requires a minimal amount of memory. However, if you count the I/O operations, you will find four I/O operations for every field, plus the writes of the monthly means. The following shows another approach.

1.  cat narr.201411????.grb2 | \
2.    wgrib2 - \
3.             -if ":HGT:200 mb:" -ave 3hr narr.201411 -fi \
4.             -if ":TMP:200 mb:" -ave 3hr narr.201411 -fi

The first line creates a file in chronological order and
   sends it to the pipe.
The second line has wgrib2 read the grib data from the pipe.
The third line selects the Z200 fields and runs the averaging
  option on them.  We are assuming that each narr.* file has
  only one Z200 field and that narr.201411???? puts the data
  into chronological order.
The fourth line selects the T200 fields and runs the averaging
  option on them.

The above approach processes the Z200 and T200 data at the same time. The I/O is a sequential read of all the files plus the writes of the monthly means. The above script is an illustration; in practice you would have an -if/-ave/-fi clause for every record in one of the grib files. The limit is that wgrib2 v2.0.1 can process 1000 regular expressions and accept 5000 words on the command line. Since each clause uses 6 words (-if, the pattern, -ave, the interval, the output file, and -fi), this implies a limit of 833 (5000/6) -if/-ave/-fi clauses. It is suggested that you use a script to generate the wgrib2 command line, such as fast_grib2_mean.sh.
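As a rough, hypothetical sketch of what such a generator could do (this is not the fast_grib2_mean.sh script), one could build one -if/-ave/-fi clause per unique variable:level found in a sample file. File names below are assumptions, and levels containing regex metacharacters would need escaping:

```shell
# Hypothetical sketch (not fast_grib2_mean.sh): build one -if/-ave/-fi
# clause per unique "VAR:level" in a sample file, then run wgrib2 once
# over all the input.  File names are assumptions; levels containing
# regex metacharacters would need escaping.
gen_clauses () {   # reads an inventory on stdin, writes clauses on stdout
  cut -d: -f4,5 | sort -u | \
    awk -v out="$1" '{printf " -if \":%s:\" -ave 3hr %s -fi", $0, out}'
}
if command -v wgrib2 >/dev/null; then   # skip if wgrib2 is not installed
  args=$(wgrib2 narr.2014110100.grb2 | gen_clauses narr.201411)
  cat narr.201411????.grb2 | eval wgrib2 - "$args"
fi
```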

In wgrib2 v2.0.2, the evaluation of the regular expressions is done in parallel using OpenMP. This reduces the overhead of having so many -if options. The configuration has also been changed to allow more words and regular expressions.

Usage

-ave (time interval)  (output grib file)
-fcst_ave (time interval)  (output grib file)

   These options only work with PDT 4.0, 4.1, and 4.8.

NOAA / National Weather Service
National Centers for Environmental Prediction
Climate Prediction Center
5830 University Research Court
College Park, Maryland 20740
Page last modified: May 15, 2005