The COMPASS Manager Information Page

This page is intended to provide a resource for the maintainer(s) of the COMPASS analysis system. Mostly it provides an overview of the system and recipes for performing routine tasks.

Overview


Software Layout


Installation


Maintenance Tasks

Software Descriptor Transfers


Dataset Descriptor Transfers

Dataset descriptors have traditionally been transferred between sites using the Oracle "exp" and "imp" facilities.  Each site would periodically export their dataset descriptor (DSD) records to a ".dmp" file and then ftp the files to the other sites using the groftp account.  At some point the dataset transfers started being done automatically via the network, but it was agreed that the transfer of the ".dmp" files would continue as a backup and check of the automatic transfers.  For the most part the import of the dataset descriptors should therefore be unnecessary and should simply show that all the records have already been transferred; if this is not the case, the automatic network transfer scripts should be checked.

The dataset descriptors arrive via the groftp account from the other sites.  The file-naming convention differs slightly from site to site, but the file will be named something like datrol153.dmp.Z.  When these files arrive they need to be processed in order.  First mv them to the $COMPASS_HOME/sbm/dmp directory.  Then use SQL to find the last descriptor received from the sending site.  This can be done as follows:

          ./check_ddt [RMSU]

where the one-letter argument specifies the sending site.  The ddt table records the dataset descriptor transfers.  Make sure the file you are going to import is the next in sequence.  Uncompress/gunzip the file to be imported, then start sbmshell as comdba and follow these steps:

Move down through the forms SYS_MAINT and DSDUTILS.  Run the tool imp_dsd_dat, which will import the descriptor dump into the comimp user.  This script will ask for an input file name and will assume the file is located in the $COMPASS_HOME/sbm/dmp directory.
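
Pulled together, the steps before entering sbmshell look roughly like this; the file name, its location under the groftp account, and the location of check_ddt are assumptions in this sketch:

        # move the incoming dump into the import area
        mv ~groftp/datrol153.dmp.Z $COMPASS_HOME/sbm/dmp
        cd $COMPASS_HOME/sbm/dmp

        # check the last descriptor received from the sending site (R here)
        ./check_ddt R

        # uncompress the dump, then run imp_dsd_dat from sbmshell as comdba
        uncompress datrol153.dmp.Z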

Importing Datasets

Exporting Datasets

The general procedure for exporting data consists of three steps:
  1. Insert entries into DTS table
  2. Convert datasets to export format and update database
  3. FTP files to the receiving site
There are several scripts which assist the users/managers in data exports but nothing that "does it all."

Filling in the DTS table can be done by using the sbmdts form. Make sure you start it as user COM. Here's an example form filled in:

                       Transfer Status of Datasets                Form SBMDTS
                         (Database Table DTS)                     User SBM_RO
                 Transfer   Request                               Version 2.1
   Dataset Id  to/from Site   Code          User ID             Date    Priority
  M TIM 20981    TO     N      T    tmillima                  20-APR-98        
  M TIM 20949    TO     N      T    tmillima                  20-APR-98        
  M TIM 20951    TO     N      T    tmillima                  20-APR-98        
  M TIM 20952    TO     N      T    tmillima                  20-APR-98        
  M TIM 20957    TO     N      T    tmillima                  20-APR-98        
  M TIM 20958    TO     N      T    tmillima                  20-APR-98        
  M OAD 27512    TO     N      T    tmillima                  20-APR-98        
  M OAD 27416    TO     N      T    tmillima                  20-APR-98
The fields describing the dataset should be obvious. The "to/from" field should be "TO" for exports. The "site" field is one of the following: The "Code" in the above form corresponds to the DTS column "status" and cannot be NULL. The acceptable values are "T" (the default transfer), "I" (IBM format transfer) and "S" (Sun format transfer); the default transfer currently uses Sun format. The user id should be filled in so that you can be contacted if there are questions, and the date should be set to whenever the request was made. Don't worry about the "priority"; it's pretty much obsolete.

For exports to GROSSC there is a script (ssc_data_update) which should help fill in the DTS table. Before running this script you need to update our local copy of the ACC table.  This is done by connecting to our sbm account and then issuing an SQL command like the following:
insert into acc select * from drg.acc@m where
def_date > TO_DATE('27-06-1998','DD-MM-YYYY');
This table contains the information on when datasets become public.  Running the ssc_data_update script will then insert DTS entries for all the data which have been made public since the last data transfer to GROSSC. See the special section on GROSSC procedures below. While it is not enforced at this level, datasets to be exported must have a dsd_quality of at least 100.
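
Before running ssc_data_update it can be reassuring to see how many rows the ACC refresh brought over; from the same sbm session, a query along these lines (with the same cutoff date as the insert) will do:

select count(*) from acc
where def_date > TO_DATE('27-06-1998','DD-MM-YYYY');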

The next step is converting the files and updating the database. The actual file conversion is done using the program expdat, whose source resides in ${COMPASS_HOME}/sbm/impexp. To prepare just one file for export, the user may invoke dataexp as follows:

  dataexp [-d] [-p input-path] [-e output-path] dataset-id [domain]
  
  e.g.
  dataexp -p /home5/comdat.6 UNH-BRM-1234 -e U1234.BRMTOMS
  or
  dataexp -p /home/aconnors/comtest/data  UNH-BRM-1234 -e U1234.BRMTOMS  ATC
Normally, however, the conversion and database updates are handled by the script export_data which currently resides in /home/exports. It uses the DTS table entries to select datasets, converts them, updates the TRDS (or SSC_TRDS) tables and on a good day will even try to FTP the smaller files ( < 1MB).

To run export_data, cd to an area with some scratch space, like /arch/tmp. Then run the script /home/exports/export_data in the background. The files will be sent via ftp to the requested site unless either the file is larger than 1 MB (when compressed) or the network is down at the remote site. In either of these two cases the files are held in a subdirectory ./ftpXXX, where 'XXX' is the site to which the files are being sent. Large files will be stored in a subdirectory ./tape. After the export has run, any files which did not get ftp'd automatically, or which will be transferred via tape, must be taken care of.
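
In practice the invocation looks something like this (the log file name and redirection are just a suggestion, not part of the script):

        cd /arch/tmp
        /home/exports/export_data > export_data.log 2>&1 &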

The script export_data uses a utility in /home/exports called putftp to ftp the files. There is also an expect script, ${COMPASS_HOME}/sbin/groftp_send, for transferring files. Tape transfer is used only rarely and should probably be avoided. If you need to transfer via tape, just create a tar tape of the files using a command something like:

        gtar -c -v -f /dev/nrst1 *SENDTONS.gz
This will create a tape of the matching files. When you send a tape just indicate on the label that it's a tar tape.
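
Before a tape goes out it is worth listing it back to confirm that the tar image is readable, e.g.:

        # rewind first, since the no-rewind device was used for the write
        mt -f /dev/nrst1 rewind
        gtar -t -v -f /dev/nrst1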

Software Transfers Between Domains

Code can be moved from a test domain by creating the correct directory in /home/compass/lib/src or in /home/compass/lib/include and copying the files directly there. You can tell which libraries are needed by running the get_link_list command for both the test domain and production. For example, if you are transferring a task from the mtm domain to production, you can compare the output of:
                get_link_list mtm simpsf
and
                get_link_list mcconn simpsf
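A convenient way to compare the two lists is to capture them and diff them (the scratch file names are arbitrary):

                get_link_list mtm simpsf    > /tmp/mtm.libs
                get_link_list mcconn simpsf > /tmp/prod.libs
                diff /tmp/mtm.libs /tmp/prod.libs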
It is worth checking the files you've copied against the mem table records to be sure that only the files you want have been picked up and that they are named correctly. You should also check to see that each file has the proper header line containing the library and version. The source directory should NOT contain the include files as they are to be links to the production include library.

The next step is to move the task database information into production. Enter sbmshell as COMDBA and follow this path: SBMSHELL - SBMDOMUTILS - SOFTRAN. If there has never been a transfer from this test domain to production, you must set up access to that domain's tables from SBM by running the SBMSWTRG tool from the SOFTRAN form. This needs to be done only once in the life of a test domain, since the grants are permanent. Then transfer the task into production using the tool SBMSWTRT in the SOFTRAN form. The transfer must be made to SBM, not COM (when you are asked for a destination id).

At this point it is good to compile the libraries to see that they really compile (sometimes they don't, to the shock of the developer). It is also possible to pick up table errors here, such as the developer leaving out the lib table info, etc. A lack of some table information won't stop a task from working in a test domain, but it can cause a massive loss of information when you transfer the table information into production, which will stop the task from working.

Once everything has compiled you can make it current. Go into sbmshell with this path: SBMSHELL - SYS_MAINT - SOFUTILS. You can run the SBMTVCUR tool in sofutils to make that version of the task current.

The next step is to transfer the job. If it is a new task, you should wait until after you make the task current, since a job will not transfer unless all of its tasks are current in the receiving id. It is always important to transfer the job for all new tasks, since the PDM table information is tied to the job id but can change from version to version of a task. To transfer the info, go into sbmshell with the following path: SBMSHELL - SBMDOMUTILS - SOFTRAN and run the tool SBMSWTRJ to move the job into SBM. Even though it is important to move the new pdm information into production with the task, it can cause problems. Some developers do not have the pdm information in their domains (not knowing they need it), and you can in fact destroy the pdm entries for that task. I take the time to check by hand what is in both production and the test domain, and say something if there is no information in the test domain but there is some in production.

It is a good idea to go into compass and bring up the task parameter form for new tasks to be sure that there are no new datatypes or bad default values. They will show up as errors when the task parameter form comes up. 


Science Support Center Tasks


Oracle Database Info/Tasks

Oracle File Locations

All of the Oracle software should be located in the $ORACLE_HOME directory. The file /etc/system contains modifications (for shared memory). The modifications that have been added are:
*
* additions for oracle
*
set shmsys:shminfo_shmmax=25165824
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
set semsys:seminfo_semmns=200
set semsys:seminfo_semmni=70
set semsys:seminfo_semmsl=35
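After a reboot, the values the kernel actually picked up can be checked against this list; on Solaris something like the following should show the shared memory and semaphore settings:

        sysdef | grep -i shm
        sysdef | grep -i sem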
Other important files (especially tnsnames.ora) have locations specified at install time. Our tnsnames.ora resides in /var/opt/oracle. The remaining information about file locations is contained in the configuration files for the database or in the database itself. Right now the configuration files are stored in the directory $ORACLE_HOME/dbs/. The files initcomptel.ora and configcomptel.ora contain the information I've used. There is also a file crdb2comptel.sql which contains the sql to create the database tables and views for the COMPASS system.

The database files are currently stored on the /db disk partition. Here's a simple Bourne shell function to list the database files associated with a given tablespace:

     list_data_files () {
         #
         # list the data files belonging to the given tablespace
         #
         tablespace=$1
         #
         svrmgrl << EOF | grep dbf
         connect internal
         select file_name from sys.dba_data_files where
         tablespace_name='$tablespace';
         exit
EOF
     }
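For example, to list the files behind the SYSTEM tablespace (any tablespace name from the database will do; note that dba_data_files stores the names in upper case):

     list_data_files SYSTEM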
The list of database files is currently:
/db/comguest.dbf     /db/comind7.dbf      /db/rb2comptel.dbf
/db/comind1.dbf      /db/comind8.dbf      /db/rbscomptel.dbf
/db/comind2.dbf      /db/comtab1.dbf      /db/systcomptel.dbf
/db/comind3.dbf      /db/comtab2.dbf      /db/tempcomptel.dbf
/db/comind4.dbf      /db/comtab3.dbf      /db/toolcomptel.dbf
/db/comind5.dbf      /db/comtab4.dbf      /db/usr2comptel.dbf
/db/comind6.dbf      /db/comtab5.dbf      /db/usrcomptel.dbf
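This list can be regenerated at any time with the same kind of query over all the tablespaces:

     svrmgrl << EOF | grep dbf
     connect internal
     select file_name from sys.dba_data_files;
     exit
EOF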
The redo logs are in /home/oracle/dbs and are mirrored in /var/oracle. Automatic archiving of the logfiles is enabled to the directory /dbbackup/redo_logs. Here's a little perl script we run to compress the redo logs:
#!/usr/local/bin/perl
#
# Get the files which are older than $mtmax days old
# and compress them
#
# Check to see that we're the oracle user
#
$login = getlogin || (getpwuid($<))[0];
die "you must be oracle to run shrink_old_logs\n" unless ( $login eq 'oracle' );
#
# Set up some variables
#
$logdir="/dbbackup/redo_arch";
$mtmax=1.0;
#
# check all the logs in the log directory
#
foreach $fpath  ( <$logdir/*.arc> ) {
    print "$fpath\n";
    if (-f $fpath ){
        if ( ( -M $fpath ) > $mtmax )  {
            print "/usr/local/bin/gzip -v  $fpath\n";
            $error=system(("/usr/local/bin/gzip","-v","$fpath"));
            if ( $error != 0 ) {
                die "error $error compressing $fpath\n";
            }
        }
    }
}
exit;

This script runs daily as one of oracle's cronjobs. The backup script should take care of removing redo logs which are no longer needed and keeping this directory clean.
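
The crontab entry is along these lines; the script path and run time shown here are assumptions, not copied from the live crontab:

        # oracle's crontab: compress old archived redo logs nightly
        # (the path to shrink_old_logs is an assumption)
        0 3 * * * /home/oracle/bin/shrink_old_logs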

There are three copies (mirrors) of the database control file:

        $ORACLE_HOME/dbs/ctrl1comptel.ctl
        $ORACLE_HOME/dbs/ctrl2comptel.ctl
        /var/oracle/ctrl3comptel.ctl
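
The control files actually in use can be checked against this list with a quick query; the heredoc delimiter is quoted so the shell leaves v$controlfile alone:

        svrmgrl << 'EOF'
        connect internal
        select name from v$controlfile;
        exit
EOF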

The other files which are important for the database operation are stored.

Oracle Backups