Bash - Search and Replace operation with reporting

Published 2020-02-06 17:42

I have an input file "test.txt" as below -

hostname=abc.com hostname=xyz.com
db-host=abc.com db-host=xyz.com

In each line, the value before the space is the old value, which needs to be replaced by the new value after the space, recursively in a folder named "test". I am able to do this using the shell script below.

#!/bin/bash

IFS=$'\n' 
for f in `cat test.txt`
do
  OLD=$(echo $f| cut -d ' ' -f 1) 
  echo "Old = $OLD"
  NEW=$(echo $f| cut -d ' ' -f 2)
  echo "New = $NEW"
  find test -type f | xargs sed -i.bak "s/$OLD/$NEW/g"
done

"sed" replaces the strings on the fly in 100s of files.

Is there a trick or an alternative way by which I can get a report of the files changed, like the absolute path of each file and the exact lines that got changed?

PS - I understand that sed and other stream editors don't support this functionality out of the box. I don't want to use versioning, as it would be overkill for this task.

3 Answers
Anthone · 2020-02-06 18:00

From man sed:

   -i[SUFFIX], --in-place[=SUFFIX]
          edit files in place (makes backup if SUFFIX supplied)

This can be used to create a backup file when replacing. You can then look for any backup files, which indicate which files were changed, and diff those with the originals. Once you're done inspecting the diff, simply remove the backup files.
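
A minimal sketch of that workflow, assuming the .bak suffix from the question's script (readlink -f gives the absolute path the question asks for; adjust the cleanup step to taste):

find test -type f -name '*.bak' | while read -r bak; do
    orig="${bak%.bak}"
    if ! cmp -s "$bak" "$orig"; then
        echo "=== $(readlink -f "$orig") ==="    # absolute path of a changed file
        diff "$bak" "$orig"                      # lines before (<) and after (>) the change
    fi
    rm -- "$bak"                                 # drop the backup once inspected
done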

If you formulate your replacements as sed statements rather than a custom format, you can go one step further and use either a sed shebang line or pass the file to -f/--file to do all the replacements in one operation.
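
For example, something along these lines could generate a sed script from test.txt and apply it in one pass (replacements.sed is an arbitrary name, and this assumes the old/new values contain no characters that are special to sed):

# turn each "old new" line into "s/old/new/g"
sed 's|^\(.*\) \(.*\)$|s/\1/\2/g|' test.txt > replacements.sed
find test -type f -exec sed -i.bak -f replacements.sed '{}' +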

Bombasti · 2020-02-06 18:10

There are several problems with your script; just replace it all with the following (using GNU awk instead of GNU sed for in-place editing):

mapfile -t files < <(find test -type f)
awk -i inplace '
NR==FNR { map[$1] = $2; print; next }   # print keeps test.txt intact, since -i inplace rewrites it too
{ for (old in map) gsub(old,map[old]); print }
' test.txt "${files[@]}"

You'll find that this is orders of magnitude faster than what you were doing.

That still has the same issues as your existing script: it fails when the "test.txt" strings contain regexp or backreference metacharacters, it can modify previously-modified strings, and it matches partial words. If any of that is an issue, let us know, as it's easy to work around with awk (and extremely difficult with sed!).
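
For what it's worth, here is one possible way to handle the metacharacter point (not necessarily what the answerer has in mind, and shown without in-place editing for brevity): replace literally with index()/substr() instead of gsub(). It does not address partial matches or re-matching of already-replaced text.

awk '
NR==FNR { map[$1] = $2; next }
{
    for (old in map) {
        out = ""
        while ((pos = index($0, old)) > 0) {     # index() matches literally, so "." etc. are safe
            out = out substr($0, 1, pos - 1) map[old]
            $0  = substr($0, pos + length(old))
        }
        $0 = out $0
    }
    print
}
' test.txt somefile        # "somefile" stands in for whichever files you want to process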

To get whatever kind of report you want, you just tweak the { for ... } block to print it, e.g. to print a record of the changes to stderr:

mapfile -t files < <(find test -type f)
awk -i inplace '
NR==FNR { map[$1] = $2; print; next }
{
    orig = $0
    for (old in map) {
        gsub(old,map[old])
    }
    if ($0 != orig) {
        printf "File %s, line %d: \"%s\" became \"%s\"\n", FILENAME, FNR, orig, $0 | "cat>&2"
    }
    print
}
' test.txt "${files[@]}"
地球回转人心会变 · 2020-02-06 18:13

Let's start with a simple rewrite of your script, to make it a little more robust at handling a wider range of replacement values, and also faster:

#!/bin/bash

# escape regexp and replacement strings for sed
escapeRegex() { sed 's/[^^]/[&]/g; s/\^/\\^/g' <<<"$1"; }
escapeSubst() { sed 's/[&/\]/\\&/g' <<<"$1"; }

while read -r old new; do
    find test -type f -exec sed "s/$(escapeRegex "$old")/$(escapeSubst "$new")/g" -i '{}' \;
done <test.txt

So, we loop over pairs of whitespace-separated fields (old, new) in lines from test.txt and run a standard sed in-place replace on all files found with find.

Pretty similar to your script, but we properly read lines from test.txt (no word splitting, pathname/variable expansion, etc.); we use Bash builtins whenever possible (no need to call external tools like cat, cut, xargs); and we escape sed metacharacters in the old/new values for proper use as sed's regexp and replacement expressions.
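
For illustration, this is roughly what those helpers produce for a couple of made-up sample values:

escapeRegex "db-host=abc.com"   # -> [d][b][-][h][o][s][t][=][a][b][c][.][c][o][m]  (each char bracketed, so "." is literal)
escapeSubst "a/b&c"             # -> a\/b\&c  (replacement metacharacters get backslashed)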

Now let's add logging from sed:

#!/bin/bash

# escape regexp and replacement strings for sed
escapeRegex() { sed 's/[^^]/[&]/g; s/\^/\\^/g' <<<"$1"; }
escapeSubst() { sed 's/[&/\]/\\&/g' <<<"$1"; }

while read -r old new; do
    find test -type f -printf '\n[%p]\n' -exec sed "/$(escapeRegex "$old")/{
        h
        s//$(escapeSubst "$new")/g
        H
        x
        s/\n/ --> /
        w /dev/stdout
        x
    }" -i '{}' > >(tee -a change.log) \;
done <test.txt

The sed script above changes each old to new, but it also writes an old --> new line to /dev/stdout, which we in turn append to the change.log file via tee and Bash process substitution. The -printf action in find outputs a "header" line with the file name for each file processed.

With this, your "change log" will look something like:

[file1]
hostname=abc.com --> hostname=xyz.com

[file2]

[file1]
db-host=abc.com --> db-host=xyz.com

[file2]
db-host=abc.com --> db-host=xyz.com

Just for completeness, a quick walk-through of the sed script. We act only on lines containing the old value. For each such line, we store it in the hold space (h), change it to the new value, and append that new value to the hold space (joined with a newline, H), which now holds old\nnew. We swap the hold and pattern spaces (x), so we can run an s command that converts it to old --> new. After writing that to stdout with w, we move the new value back from the hold space to the pattern space, so it gets written (in place) to the file being processed.
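
As a rough trace for one matching line (with old = hostname=abc.com and new = hostname=xyz.com; \n marks the embedded newline):

                 pattern space                              hold space
start            hostname=abc.com                           (empty)
h                hostname=abc.com                           hostname=abc.com
s//new/g         hostname=xyz.com                           hostname=abc.com
H                hostname=xyz.com                           hostname=abc.com\nhostname=xyz.com
x                hostname=abc.com\nhostname=xyz.com         hostname=xyz.com
s/\n/ --> /      hostname=abc.com --> hostname=xyz.com      hostname=xyz.com
w /dev/stdout    (that "old --> new" line goes to the log)
x                hostname=xyz.com                           hostname=abc.com --> hostname=xyz.com
end of cycle     hostname=xyz.com is written back to the file in place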
