Shell script templates [closed]

Published 2019-01-21 04:29

Question:

What would be your suggestions for a good bash/ksh script template to use as a standard for all newly created scripts?

I usually start (after the #! line) with a commented-out header containing the filename, synopsis, usage, return values, author(s), and changelog, all fitting into 80-character lines.

I start all documentation lines with a double hash (##) so I can grep for them easily, and I prefix local variable names with "__".

Any other best practices? Tips? Naming conventions? What about return codes?

Comments on version control: we do use SVN, but another department in the enterprise has a separate repository, and this is their script. How do I know whom to contact with questions if there is no @author info? Using entries similar to Javadoc has some merit even in the shell context, IMHO, but I might be wrong.

Answer 1:

I'd extend Norman's answer to 6 lines, and the last of those is blank:

#!/bin/ksh
#
# @(#)$Id$
#
# Purpose
 

The third line is a version control identification string - it is actually a hybrid of an SCCS marker '@(#)' (which the SCCS program 'what' can identify) and an RCS version string (which is expanded when the file is put under RCS, the default VCS I use for my private work). The RCS program 'ident' picks up the expanded form of $Id$, which might look like $Id: mkscript.sh,v 2.3 2005/05/20 21:06:35 jleffler Exp $. The fifth line reminds me that the script should have a description of its purpose at the top; I replace the word with an actual description of the script (which is why there's no colon after it, for example).

After that, there is essentially nothing standard for a shell script. There are standard fragments that appear, but no standard fragment that appears in every script. (My discussion assumes that scripts are written in Bourne, Korn, or POSIX (Bash) shell notations. There's a whole separate discussion on why anyone putting a C Shell derivative after the #! sigil is living in sin.)

For example, this code appears in some shape or form whenever a script creates intermediate (temporary) files:

tmp=${TMPDIR:-/tmp}/prog.$$
trap "rm -f $tmp.?; exit 1" 0 1 2 3 13 15

...real work that creates temp files $tmp.1, $tmp.2, ...

rm -f $tmp.?
trap 0
exit 0

The first line chooses a temporary directory, defaulting to /tmp if the user did not specify an alternative ($TMPDIR is very widely recognized and is standardized by POSIX). It then creates a file name prefix including the process ID. This is not a security measure; it is a simple concurrency measure, preventing multiple instances of the script from trampling on each other's data. (For security, use non-predictable file names in a non-public directory.)

The second line ensures that the 'rm' and 'exit' commands are executed if the shell receives any of the signals SIGHUP (1), SIGINT (2), SIGQUIT (3), SIGPIPE (13) or SIGTERM (15). The 'rm' command removes any intermediate files that match the template; the 'exit' command ensures that the status is non-zero, indicating some sort of error. The trap on 0 means that the code is also executed if the shell exits for any reason - it covers carelessness in the section marked 'real work'.

The code at the end then removes any surviving temporary files before lifting the trap on exit, and finally exits with a zero (success) status. Clearly, if you want to exit with another status, you may - just make sure you set it in a variable before running the 'rm' and 'trap' lines, and then use exit $exitval.

I usually use the following to remove the path and suffix from the script, so I can use $arg0 when reporting errors:

arg0=$(basename "$0" .sh)

I often use a shell function to report errors:

error()
{
    echo "$arg0: $*" 1>&2
    exit 1
}

If there's only one or maybe two error exits, I don't bother with the function; if there are any more, I do because it simplifies the coding. I also create more or less elaborate functions called usage to give the summary of how to use the command - again, only if there's more than one place where it would be used.
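A minimal usage() in that spirit might look like this (the option summary shown here is invented for illustration; it should mirror your real getopts string):

```shell
# Strip path and suffix so error/usage messages name the script cleanly.
arg0=$(basename "$0" .sh)

# Print a one-line summary to stderr and exit with a non-zero status.
usage() {
    echo "Usage: $arg0 [-hvV][-f file][-o file][-D define] [arg ...]" 1>&2
    exit 1
}
```

Like the error() function, it only earns its keep once it is called from more than one place.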

Another fairly standard fragment is an option parsing loop, using the getopts shell built-in:

vflag=0
out=
file=
Dflag=
while getopts hvVf:o:D: flag
do
    case "$flag" in
    (h) help; exit 0;;
    (V) echo "$arg0: version $Revision$ ($Date$)"; exit 0;;
    (v) vflag=1;;
    (f) file="$OPTARG";;
    (o) out="$OPTARG";;
    (D) Dflag="$Dflag $OPTARG";;
    (*) usage;;
    esac
done
shift $(expr $OPTIND - 1)

or:

shift $(($OPTIND - 1))

The quotes around "$OPTARG" handle spaces in arguments. The Dflag is cumulative, but the notation used here loses track of spaces in arguments. There are (non-standard) ways to work around that problem, too.

The first shift notation works with any shell (or would if I used back-ticks instead of '$(...)'). The second works in modern shells; there might even be an alternative with square brackets instead of parentheses, but this works, so I've not bothered to work out what that is.
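One such non-standard workaround, if you can require bash, is to collect the repeated -D arguments in an array instead of a single string; each element then keeps its embedded spaces. This is only a sketch - the function and variable names are invented:

```shell
#!/bin/bash
# Collect repeatable -D options into an array; arrays preserve embedded
# spaces that the single-string $Dflag approach loses. Bash-only, not
# portable to plain Bourne/POSIX sh.
collect_defines() {
    local flag OPTIND=1
    Dargs=()
    while getopts D: flag; do
        case "$flag" in
        (D) Dargs+=("$OPTARG");;
        esac
    done
}

collect_defines -D "NAME=John Doe" -D DEBUG=1
# Later, each define is passed on as a single word, spaces intact:
#   some_command "${Dargs[@]}"
```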

One final trick for now is that I often have both the GNU and a non-GNU version of programs around, and I want to be able to choose which I use. Many of my scripts, therefore, use variables such as:

: ${PERL:=perl}
: ${SED:=sed}

And then, when I need to invoke Perl or sed, the script uses $PERL or $SED. This helps me when something behaves differently - I can choose the operational version - or while developing the script (I can add extra debug-only options to the command without modifying the script). (See Shell parameter expansion for information on the ${VAR:=value} and related notations.)



Answer 2:

I use the first set of ## lines for the usage documentation. I can't remember now where I first saw this.

#!/bin/sh
## Usage: myscript [options] ARG1
##
## Options:
##   -h, --help    Display this message.
##   -n            Dry-run; only show what would be done.
##

usage() {
  [ "$*" ] && echo "$0: $*"
  sed -n '/^##/,/^$/s/^## \{0,1\}//p' "$0"
  exit 2
} 2>/dev/null

main() {
  while [ $# -gt 0 ]; do
    case $1 in
    (-n) DRY_RUN=1;;
    (-h|--help) usage 2>&1;;
    (--) shift; break;;
    (-*) usage "$1: unknown option";;
    (*) break;;
    esac
    shift
  done
  : do stuff.
}


Answer 3:

Any code that is going to be released in the wild should have the following short header:

# Script to turn lead into gold
# Copyright (C) 2009 Joe Q Hacker - All Rights Reserved
# Permission to copy and modify is granted under the foo license
# Last revised 1/1/2009

Keeping a change log going in code headers is a throwback from when version control systems were terribly inconvenient. A last modified date shows someone how old the script is.

If you are going to rely on bashisms, use #!/bin/bash, not #!/bin/sh - sh is the POSIX invocation, and even if /bin/sh points to bash, many features are turned off when bash runs as sh. On several Linux distributions /bin/sh is not bash at all (Debian and Ubuntu, for example, point it at dash), so scripts that rely on bashisms will break there; try to be portable.

To me, comments in shell scripts are sort of silly unless they read something like:

# I am not crazy, this really is the only way to do this

Shell scripting is so simple that (unless you're writing a demonstration to teach someone how to do it) the code nearly always speaks for itself.

Some shells don't support typed 'local' variables; I believe BusyBox (a common rescue shell) is still one of them. Use GLOBALS_OBVIOUS instead; it's much easier to read, especially when debugging via /bin/sh -x ./script.sh.

My personal preference is to let logic speak for itself and minimize work for the parser. For instance, many people might write:

if [ "$i" = 1 ]; then
    ... some code 
fi

Where I'd just:

[ "$i" = 1 ] && {
    ... some code
}

Likewise, someone might write:

if [ "$i" -ne 1 ]; then
   ... some code
fi

... where I'd:

[ "$i" = 1 ] || {
   ... some code 
}

The only time I use conventional if / then / else is if there's an else-if to throw in the mix.

A horribly insane example of very good portable shell code can be studied by just viewing the 'configure' script in most free software packages that use autoconf. I say insane because it's some 6,300 lines of code that cater to every system known to man that has a UNIX-like shell. You don't want that kind of bloat, but it is interesting to study some of the various portability hacks within - such as being nice to those who might point /bin/sh to zsh :)

The only other advice I can give is to watch your expansions in here-docs, e.g.

cat << EOF > foo.sh
   printf "%s was here" "$name"
EOF

... is going to expand $name, when you probably want to leave the variable in place. Solve this via:

  printf "%s was here" "\$name"

which will leave $name as a variable, instead of expanding it.
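Another way to solve the same problem, instead of escaping each $, is to quote the here-doc delimiter; the shell then performs no expansion at all inside the body:

```shell
# Quoting the delimiter ('EOF') disables all expansion in the here-doc,
# so $name reaches foo.sh literally, with no backslashes needed.
cat << 'EOF' > foo.sh
   printf "%s was here" "$name"
EOF
```

This is usually less error-prone than escaping when the body contains many variables.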

I also highly recommend learning how to use trap to catch signals, and making use of those handlers as boilerplate code. Telling a running script to slow down with a simple SIGUSR1 is quite handy :)
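A hypothetical sketch of that idea: a SIGUSR1 handler toggles a flag, and the main loop throttles itself while the flag is set (send the signal from another terminal with kill -USR1 <pid>). The function and variable names are invented:

```shell
# Toggle a "slow" flag each time SIGUSR1 arrives; the worker loop
# checks the flag and sleeps between items when it is set.
SLOW=0
toggle_slow() { SLOW=$((1 - SLOW)); }
trap toggle_slow USR1

# Hypothetical main loop:
# while work_remains; do
#     process_next_item
#     [ "$SLOW" -eq 1 ] && sleep 1
# done
```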

Most new programs that I write (which are tool / command-line oriented) start out as shell scripts; it's a great way to prototype UNIX tools.

You might also like the SHC shell script compiler.



Answer 4:

Enabling error detection makes it much easier to detect problems in the script early:

set -o errexit

Exit script on first error. That way you avoid continuing on to do something which depended on something earlier in the script, perhaps ending up with some weird system state.

set -o nounset

Treat references to unset variables as errors. Very important to avoid running things like rm -you_know_what "$var/" with an unset $var. If you know that the variable can be unset, and this is a safe situation, you can use ${var-value} to use a different value if it's unset or ${var:-value} to use a different value if it's unset or empty.
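The difference between the two forms is easy to check ('var' here is just a throwaway name):

```shell
unset var
echo "${var-fallback}"    # prints "fallback": var is unset

var=""
echo "${var-fallback}"    # prints "": var is set, even though empty
echo "${var:-fallback}"   # prints "fallback": the :- form treats empty like unset
```

Both forms are also safe under `set -o nounset`, which is what makes them useful here.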

set -o noclobber

It's easy to make the mistake of inserting a > where you meant to insert <, and overwrite some file which you meant to read. If you need to clobber a file in your script, you can disable this before the relevant line and enable it again afterwards.

set -o pipefail

Use the first non-zero exit code (if any) of a set of piped commands as the exit code of the full pipeline. This makes it easier to debug piped commands.
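A quick demonstration of the difference, using false | true as a stand-in for a real pipeline (bash/ksh, since pipefail is not in every sh):

```shell
set +o pipefail
false | true && rc=0 || rc=$?
echo "$rc"    # 0 - by default only the last command's status counts

set -o pipefail
false | true && rc=0 || rc=$?
echo "$rc"    # 1 - the failing `false` now propagates
```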

shopt -s nullglob

Prevents your /foo/* glob from being passed through literally when no files match the expression.
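The effect is easy to see with a pattern that matches nothing (bash-specific, since nullglob is a shopt):

```shell
shopt -u nullglob
set -- /no/such/dir/*
echo $#    # 1 - the unmatched pattern is passed through literally

shopt -s nullglob
set -- /no/such/dir/*
echo $#    # 0 - the unmatched pattern expands to nothing
```

Without nullglob, a loop like `for f in /foo/*` would run once with the literal string `/foo/*` when the directory is empty.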

You can combine all of these in two lines:

set -o errexit -o nounset -o noclobber -o pipefail
shopt -s nullglob


Answer 5:

This is the header I use for my shell scripts (bash or ksh). It looks like a man page, and it is used to display usage() as well.

#!/bin/ksh
#================================================================
# HEADER
#================================================================
#% SYNOPSIS
#+    ${SCRIPT_NAME} [-hv] [-o[file]] args ...
#%
#% DESCRIPTION
#%    This is a script template
#%    to start any good shell script.
#%
#% OPTIONS
#%    -o [file], --output=[file]    Set log file (default=/dev/null)
#%                                  use DEFAULT keyword to autoname file
#%                                  The default value is /dev/null.
#%    -t, --timelog                 Add timestamp to log ("+%y/%m/%d@%H:%M:%S")
#%    -x, --ignorelock              Ignore if lock file exists
#%    -h, --help                    Print this help
#%    -v, --version                 Print script information
#%
#% EXAMPLES
#%    ${SCRIPT_NAME} -o DEFAULT arg1 arg2
#%
#================================================================
#- IMPLEMENTATION
#-    version         ${SCRIPT_NAME} (www.uxora.com) 0.0.4
#-    author          Michel VONGVILAY
#-    copyright       Copyright (c) http://www.uxora.com
#-    license         GNU General Public License
#-    script_id       12345
#-
#================================================================
#  HISTORY
#     2015/03/01 : mvongvilay : Script creation
#     2015/04/01 : mvongvilay : Add long options and improvements
# 
#================================================================
#  DEBUG OPTION
#    set -n  # Uncomment to check your syntax, without execution.
#    set -x  # Uncomment to debug this shell script
#
#================================================================
# END_OF_HEADER
#================================================================

And here is the usage functions to go with:

  #== needed variables ==#
SCRIPT_HEADSIZE=$(head -200 "${0}" | grep -n "^# END_OF_HEADER" | cut -f1 -d:)
SCRIPT_NAME="$(basename "${0}")"

  #== usage functions ==#
usage() { printf "Usage: "; head -${SCRIPT_HEADSIZE:-99} "${0}" | grep -e "^#+" | sed -e "s/^#+[ ]*//g" -e "s/\${SCRIPT_NAME}/${SCRIPT_NAME}/g" ; }
usagefull() { head -${SCRIPT_HEADSIZE:-99} "${0}" | grep -e "^#[%+-]" | sed -e "s/^#[%+-]//g" -e "s/\${SCRIPT_NAME}/${SCRIPT_NAME}/g" ; }
scriptinfo() { head -${SCRIPT_HEADSIZE:-99} "${0}" | grep -e "^#-" | sed -e "s/^#-//g" -e "s/\${SCRIPT_NAME}/${SCRIPT_NAME}/g"; }

Here is what you should obtain:

# Display help
$ ./template.sh --help

    SYNOPSIS
    template.sh [-hv] [-o[file]] args ...

    DESCRIPTION
    This is a script template
    to start any good shell script.

    OPTIONS
    -o [file], --output=[file]    Set log file (default=/dev/null)
    use DEFAULT keyword to autoname file
    The default value is /dev/null.
    -t, --timelog                 Add timestamp to log ("+%y/%m/%d@%H:%M:%S")
    -x, --ignorelock              Ignore if lock file exists
    -h, --help                    Print this help
    -v, --version                 Print script information

    EXAMPLES
    template.sh -o DEFAULT arg1 arg2

    IMPLEMENTATION
    version         template.sh (www.uxora.com) 0.0.4
    author          Michel VONGVILAY
    copyright       Copyright (c) http://www.uxora.com
    license         GNU General Public License
    script_id       12345

# Display version info
$ ./template.sh -v

    IMPLEMENTATION
    version         template.sh (www.uxora.com) 0.0.4
    author          Michel VONGVILAY
    copyright       Copyright (c) http://www.uxora.com
    license         GNU General Public License
    script_id       12345

You can get the full script template here: http://www.uxora.com/unix/shell-script/18-shell-script-template



Answer 6:

My bash template is below (set up in my Vim configuration):

#!/bin/bash

## DESCRIPTION: 

## AUTHOR: $USER_FULLNAME

declare -r SCRIPT_NAME=$(basename "$BASH_SOURCE" .sh)

## exit the shell (default status code: 1) after printing the message to stderr
bail() {
    echo -ne "$1" >&2
    exit ${2-1}
} 

## help message
declare -r HELP_MSG="Usage: $SCRIPT_NAME [OPTION]... [ARG]...
  -h    display this help and exit
"

## print the usage and exit the shell (default status code: 2)
usage() {
    declare status=2
    if [[ "$1" =~ ^[0-9]+$ ]]; then
        status=$1
        shift
    fi
    bail "${1}$HELP_MSG" $status
}

while getopts ":h" opt; do
    case $opt in
        h)
            usage 0
            ;;
        \?)
            usage "Invalid option: -$OPTARG \n"
            ;;
    esac
done

shift $(($OPTIND - 1))
[[ "$#" -lt 1 ]] && usage "Too few arguments\n"

#==========MAIN CODE BELOW==========


Answer 7:

I would suggest

#!/bin/ksh

and that's it. Heavyweight block comments for shell scripts? I get the willies.

Suggestions:

  1. Documentation should be data or code, not comments. At least a usage() function. Have a look at how ksh and the other AST tools document themselves with --man options on every command. (Can't link because the web site is down.)

  2. Declare local variables with typeset. That's what it's for. No need for nasty underscores.



Answer 8:

What you can do is make a script that creates a header for a new script and then opens it automatically in your favorite editor. I saw someone do that on this site:

http://code.activestate.com/recipes/577862-bash-script-to-create-a-header-for-bash-scripts/?in=lang-bash

#!/bin/bash -       
#title           :mkscript.sh
#description     :This script will make a header for a bash script.
#author          :your_name_here
#date            :20110831
#version         :0.3    
#usage           :bash mkscript.sh
#notes           :Vim and Emacs are needed to use this script.
#bash_version    :4.1.5(1)-release
#===============================================================================


Answer 9:

Generally, I have a few conventions I like to stick to for every script I write, and I write all scripts with the assumption that other people might read them.

I start every script with my header,

#!/bin/bash
# [ID LINE]
##
## FILE: [Filename]
##
## DESCRIPTION: [Description]
##
## AUTHOR: [Author]
##
## DATE: [XX_XX_XXXX.XX_XX_XX]
## 
## VERSION: [Version]
##
## USAGE: [Usage]
##

I use that date format for easier grep/searching. I use '[' brackets to indicate text people need to enter themselves. If they occur outside a comment, I try to start them with '#['; that way, if someone pastes them as-is, they won't be mistaken for input or a test command. Check the usage section of a man page to see this style as an example.

When I want to comment out a line of code, I use a single '#'. When I am writing a comment as a note, I use a double '##'. /etc/nanorc uses that convention as well. I find it helpful to differentiate a comment that is disabled code from a comment that is a note.

I prefer to write all my shell variables in CAPS. I try to keep them between 4 and 8 characters, unless otherwise necessary. The names relate, as closely as possible, to their usage.

I also always exit with 0 on success, or 1 on error. If the script has many different types of errors (and a documented sequence would actually help someone, or could be used by other code in some way), I would choose that over a bare 1. In general, exit codes are not strictly enforced in the *nix world; unfortunately, I have never found a good general numbering scheme.

I like to process arguments in the standard manner. I always prefer getopts to getopt. I never hack something together with 'read' commands and if statements. I also like to use case statements to avoid nested ifs. I use a translating script for long options, so --help means -h to getopts. I write all scripts in either bash (if acceptable) or generic sh.
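That translating pre-pass can be sketched like this (bash, with invented option names): rewrite the long options into their short forms before handing the list to getopts.

```shell
#!/bin/bash
# Map long options onto the short ones getopts understands.
# The option names (--help, --verbose) are illustrative only.
translate_long_opts() {
    local arg
    newargs=()
    for arg in "$@"; do
        case "$arg" in
        (--help)    newargs+=(-h);;
        (--verbose) newargs+=(-v);;
        (*)         newargs+=("$arg");;
        esac
    done
}

translate_long_opts --help --verbose file.txt
set -- "${newargs[@]}"
# "$@" is now: -h -v file.txt, ready for `while getopts hv opt; do ...`
```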

I NEVER use shell-interpreted symbols (or any interpreted symbol) in filenames, or any name for that matter - specifically " ' ` $ & * # () {} [] - and I use _ for spaces.

Remember, these are just conventions - best practice, of course, but sometimes you are forced outside the lines. The most important thing is to be consistent across and within your projects.