Power Reporting using Spectre (Clinical)

Introduction

This document explains how to get the maximum benefit from Spectre (Clinical) and how to work at speed with it.

Validating reporting macros

To get the full benefit from Spectre, you need to validate the reporting macros you use. This applies not only to the Spectre reporting macros themselves, such as %npcttab and %unistats and their sub-macros, but also to other macros you write that might call these macros, such as for lab tables. You should aim to build up a comprehensive set of reporting macros that can be used for reporting different studies and have them fully validated and stored in a read-only library. This should be done in accordance with your Standard Operating Procedures, with test pack(s), full documentation and change controls applied. Although this is a lot of work to do up-front, it will save a lot of validation effort and time in the long run.

Before leaving this topic, consider how you will design the reporting macros you intend to validate.

Designing reporting macros

The reporting macro you validate must be stable and must not need amending from study to study, or you will have to repeat the validation work every time. Your data selection rules are likely to vary from study to study. If the data selection can be done easily by passing one or a few parameters to a study reporting macro, then that is the best approach; hopefully, you have useful flags in your data to simplify the selection process. Your data structure might also vary from study to study, however, in which case having a generic macro handle the data selection as well might be difficult. If that is the case, leave the data selection out of your reporting macro and have the macro work on the data after you have made the selection. A macro whose internal code has to be changed per study is not generic, nor could it be "validated" in any sense of the word. So, if you can, make the macro validated and self-contained so that you can select the data you want by setting parameters; that way the macro can be called in the "sascode:" line of your titles member. If that is not possible, then design the macro to be called after you have made the data selection in your code.

Data selection rules

The data selection rules should be clear for every reporting program you create. Even with validated macros, your data selection rules will need to be QC'ed. This will likely be done with a code review, so it is important that both the programmer and the QC programmer know what the data selection rules are. There should be a reference document listing the data selection rules for each and every table.

Utility macros

You should never need to write a non-statistical utility macro, so if you feel you need one, check all the existing macros first, as it is 99% certain that one already exists somewhere. Be aware of what macros already exist within Spectre. You can see a complete list of these macros and their purposes here. If you don't see what you want then try the list of macros on my old web site; nearly every macro in Spectre originally came from there. You can see the complete list here. I have been building up that collection since 1986, so the list is fairly complete, but if you do spot the need for a non-statistical utility macro that I haven't thought of then send me an e-mail.

Setting up and exporting Unix variables for studies

If you are working on multiple studies then you need a quick way of moving between the study folders. You can set up Unix variables for this in your .bashrc member, or .bashrc_own, or whichever file in your home directory you are allowed to edit. A study might have a programs folder, a derived datasets folder, a stats dataset folder and a raw dataset folder. If the study could be meaningfully referred to as "abc" then you could set up variables and export them like this:
 
abcp=/data/sas/xxx/pr0g/drug/study/inc/programs
abcd=/data/sas/xxx/data/drug/study/inc/der 
abcs=/data/sas/xxx/data/drug/study/inc/stat 
abcr=/data/sas/xxx/data/drug/study/inc/rawdata
export abcp abcd abcs abcr

Once you have done this for all the studies you might be working on then it is easy to move to a folder of your choice using these Unix variables like this:
 
cd $abcp

And once you have done your work in that folder then you can return to the previous folder using the command:
 
cd -
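If you hop between more than two folders, bash's built-in pushd and popd commands extend "cd -" to a whole stack of directories. This is standard bash, nothing Spectre-specific:

```shell
# pushd changes folder like cd, but also remembers where you came from
# on a stack; popd takes you back. Unlike "cd -", the stack can be
# several folders deep.
pushd /tmp > /dev/null    # work in /tmp for a while...
pushd /usr > /dev/null    # ...then in /usr...
popd  > /dev/null         # ...back to /tmp...
popd  > /dev/null         # ...and back to where you started
```

The "> /dev/null" just suppresses pushd's printing of the directory stack each time.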

These Unix variables are useful in other Unix commands. For example, here is how you would copy all the macros owned by you (assuming they match the pattern m_*.sas) from the programs folder of study "abc" into your current folder:
 
cp $(myfiles $abcp/m_*.sas) .

aliases

While we are on the subject of editing our .bashrc (or .bashrc_own) members, there are always aliases we can set up in them to save ourselves a bit of time. I have aliased the "crtitlesds" command to "crt" because I use it thousands of times and wanted it shorter, and I have other aliases set up as well. Here is an extract from my .bashrc file:
 
alias rm='rm -i'
alias cp='cp -i'
alias p='echo $prog'
alias c='cat $prog.titles|grep ^sascode:'
alias r='cat $prog.titles|egrep -i "^table |^appendix "'
alias dtv='dtpad -viewOnly '
alias crt='crtitlesds'

You will see from the above that I have aliased a few things so that I can extract information from my .titles member quickly: "p" gives me my current program name, "c" gives me my "sascode:" entry and "r" gives me my table/appendix reference number. I could enter the command "titles" and just look inside my .titles member instead, but using these aliases saves me a small amount of time and, multiplied by a thousand or more uses, that makes them worthwhile to set up.
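To see what the "c" and "r" aliases are actually doing, here they are spelled out against a stand-in titles member. The content below is invented purely for illustration; your real .titles members will differ:

```shell
# Create a throwaway stand-in for a .titles member (invented content):
cat > t_demog.titles << 'EOF'
table 13.2.1.3
title1 Demographics
sascode: %npcttab
EOF

grep '^sascode:' t_demog.titles                 # what the "c" alias shows
egrep -i '^table |^appendix ' t_demog.titles    # what the "r" alias shows
rm t_demog.titles                               # tidy up
```

Note that the "^" anchors the pattern to the start of the line, so "table" mentioned mid-line in a title would not match.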

xterm windows

One of the advantages of Spectre over a web-based system is that by staying close to Unix you have a host of useful utilities at your disposal. You can also use as many xterm windows as you wish, so you can have windows sitting in the directories of different studies at the same time. I normally work with three xterm windows and add more as required using the command below, which opens a new window while keeping the current window active:
 
xterm &

You can customize the xterm windows, and I do this to reduce eye strain. I find the default black text on a white background too tiring to look at for long periods, so I change the background colour. I also do this for interactive SAS® software sessions. To customize your xterm windows, create a file in your home directory named .Xdefaults, or edit it if it already exists. This is what I have in my own .Xdefaults file; lines starting with a "!" are treated as comment lines. You will see that I have set my background colour for xterm windows to "AntiqueWhite". Note that for interactive SAS software sessions the background was set to "wheat", but I have commented that line out because it no longer works for me.
 
xterm*background: AntiqueWhite 
xterm*geometry: 81x24 
!sas*background: wheat 
!xterm*foreground: black 
ghostscript*background: white 
ghostscript*foreground: black 
!ghostscript*useXPutImage: false 
!ghostscript*useXSetTile: false 
!ghostscript*useBackingPixmap: false

For changing the background colour of my interactive SAS software sessions, I used to be able to do it as shown above with the "sas*background: wheat" entry, but that no longer works for me. Now I do it using "reshelper" instead, called from the toolbox command box: click on the "Colors" button, double-click on the background colour "white" (or whatever the background is set to) and select a different colour from the list displayed; that colour will then replace the white (this only works for Unix/Linux).

"fsv" scripts

You have a number of scripts at your disposal that allow you to browse SAS software datasets directly from the Unix command line in the current directory. You will find these easier and quicker than starting up an interactive SAS software session, so get used to using them as they will speed up your work. They are as follows:
  • fsv: Browse a dataset with all formats applied
  • fsvraw: Browse a dataset with no user formats applied
  • fsvdc: Browse a dataset with user formatted fields shown unformatted and formatted next to each other
  • fsvacct: Browse a dataset with fields from the acct dataset merged in as specified

"contents" scripts

There are two scripts that run a "proc contents" on the specified datasets or on a whole library, and you should find them more useful than running "proc contents" yourself because they put the output in a form that can be passed on to other Unix utilities. Suppose you wanted to know which variables in which datasets had something to do with "date"; you could find out like this:

contentsl | fgrep -i date

Using the above is far faster than running "proc contents" yourself, and you will often make use of it. The two scripts are as follows:
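Because the "contents" scripts emit plain text, any Unix filter can follow them in a pipeline. The sketch below fakes the script with a shell function purely to show the pipeline shape; the real contentsl output format may well differ from this invented one:

```shell
# Stand-in for the real contentsl script, with an invented output
# format of one line per variable: "dataset variable type".
contentsl () {
  printf 'demog brthdate num\ndemog sex char\nae aestdate num\n'
}

contentsl | fgrep -i date            # which variables mention "date"
contentsl | fgrep -i date | wc -l    # ...and how many there are
```

Once output is line-oriented text like this, the whole Unix toolset (grep, sort, awk, wc and friends) applies to it.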

"formats" scripts

You will often need to know what formats exist, either for a study or local to a directory, and how the start values map to their labels. The scripts that exist to help you are again more convenient than using SAS software yourself. Scripts such as these put information about SAS software objects at your fingertips in your Unix session and help speed your work. The scripts are as follows:

batch development scripts

Spectre is very much Unix oriented. Part of the reasoning behind it is that programmers who know the Unix environment well can achieve greater speed, and a higher number of programs per programmer, than is possible through an interactive reporting system. Recognizing that most programmers are used to developing all their code in interactive SAS software sessions, Spectre provides scripts that make batch development just as convenient, to encourage you to work in batch instead. One of the most convenient aspects of interactive development is the ease of moving from window to window, such as from the program window to the log window to the output window. Spectre provides scripts so that you can do the same in batch mode. This relies on your declaring your program name to an exported Unix variable named "prog". Suppose you were working on a demog table named "t_demog"; you would declare it to the "prog" variable like this:

export prog   # only needed once per xterm window
prog=t_demog

Once your program name (without the .sas extension) is declared to the "prog" variable, the following scripts can be invoked without values to work on your program:

Habits die hard, so it might take you a long time to change over from developing your code interactively to doing it in batch. But when you become good with Unix, perhaps with the help of the learning materials on this web site, you will be tempted to move over to batch development, and you will find that the above scripts make you feel just as at home in batch as you did developing your programs interactively. Some time after you have made the transition, you will find that you are working faster in batch than you ever did working interactively.
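The reason "export prog" is needed at all is that batch jobs and scripts run as child processes of your shell, and child processes only see exported variables. You can convince yourself of this with nothing but standard commands:

```shell
prog=t_demog                        # set, but not yet exported
sh -c 'echo "child sees: $prog"'    # the child sees an empty value
export prog
sh -c 'echo "child sees: $prog"'    # now the child sees t_demog
```

This is also why the export only needs doing once per xterm window: once the name is marked for export, later assignments to "prog" in that shell are passed to every child automatically.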

wt

"wt" is short for "which titles member". It is short so you can type it fast. It takes a table reference number as its argument and returns the name of the titles member without the ".titles" extension. Nearly everybody you are in contact with will refer to tables and listings by their table or listing reference number, so you need to translate that into the program name quickly. You will be using "wt" a lot to switch to a new value of "prog" using copy and paste, like this:

$ wt 13.2.1.3
t_demog
$ prog=t_demog
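Rather than copying and pasting, you can set "prog" straight from "wt" with command substitution. The wt function below is only a stand-in that echoes the example answer above, so that the snippet is self-contained; on a real Spectre installation the wt script itself would already be on your path:

```shell
wt () { echo t_demog; }   # stand-in for the real wt script (illustration only)

prog=$(wt 13.2.1.3)       # set prog directly from the table reference number
export prog
echo "$prog"
```

The $( ) captures the script's output, so one line replaces the copy-and-paste step.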

Conclusion

Hopefully, the above information will prove useful to you in getting the most out of Spectre and allowing you to work at speed.

     

SAS and all other SAS Institute Inc. product or service names are registered trademarks or trademarks of SAS Institute Inc. in the USA and other countries. ® indicates USA registration.