Question:
Is there a simple way to remove the same line of text from a folder full of text documents at the command line?
Answer 1:
If your version of sed supports the -i.bak option (edit in place, keeping a backup of each original file with a .bak extension):
sed -i.bak '/line of text/d' *
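For example, here is a minimal before/after check, assuming the documents are the *.txt files in the current directory and the unwanted line matches 'line of text' (the file names are an assumption, not part of the original answer):
grep -c 'line of text' *.txt            # count matching lines per file before the edit
sed -i.bak '/line of text/d' *.txt
grep -c 'line of text' *.txt            # every count should now be 0
ls *.txt.bak                            # the untouched originals are kept as .bak backups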
If not, simply put it in a bash loop:
for file in *.txt
do
    sed '/line of text/d' "$file" > "$file".new_file.txt
    mv "$file".new_file.txt "$file"
done
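If the -i.bak form fails because you are on BSD/macOS sed, note that -i is supported there too but requires an explicit suffix argument; an empty one disables the backup (an aside, not part of the original answer):
sed -i '' '/line of text/d' *.txt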
Answer 2:
To find a pattern and remove every line containing it, the following command can be used:
find . -type f -print0 | xargs -0 sed -i -e '/<PATTERN>/d'
Example: to remove every line containing the word sleep in all the XML files:
find . -name "*.xml" -type f -print0 | xargs -0 sed -i -e '/sleep/d'
NOTE: Be careful when choosing the pattern, as this deletes matching lines recursively in every file under the current directory hierarchy :)
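Since the deletion is recursive and not easily undone, it can be worth previewing the matches first. A minimal sketch using the same sleep example (this check is not part of the original answer):
find . -name "*.xml" -type f -print0 | xargs -0 grep -n 'sleep'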
Answer 3:
Consider grep -v:
for thefile in *.txt ; do
    grep -v "text to remove" "$thefile" > "$thefile.$$.tmp"
    mv "$thefile.$$.tmp" "$thefile"
done
grep -v prints every line except those that match; the filtered output goes into a temporary file, which is then moved back over the original file name.
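A small variation (a sketch, not from the original answer) rewrites a file only when something was actually removed, so files without a match keep their original timestamps:
for thefile in *.txt ; do
    grep -v "text to remove" "$thefile" > "$thefile.$$.tmp"
    if cmp -s "$thefile" "$thefile.$$.tmp" ; then
        rm "$thefile.$$.tmp"            # identical output: nothing matched, keep the original untouched
    else
        mv "$thefile.$$.tmp" "$thefile"
    fi
done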
Answer 4:
perl -ni -e 'print if not /mystring/' *
This tells perl to loop over your files (-n), edit in place (-i), and print each line only if it does not match your regular expression.
Somewhat related, here's a handy way to perform a substitution over several files.
perl -pi -e 's/something/other/' *
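If you want to keep a backup of each original (an aside, not part of the original answer), perl's -i accepts a suffix just as sed's does:
perl -n -i.bak -e 'print if not /mystring/' *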
Answer 5:
I wrote a Perl script for this:
#!/usr/bin/perl
use IO::Handle;

my $pat = shift(@ARGV) or
    die("Usage: $0 pattern files\n");
die("Usage: $0 pattern files\n")
    unless @ARGV;

foreach my $file (@ARGV) {
    my $io = new IO::Handle;
    open($io, '<', $file) or
        die("Cannot read $file: $!\n");
    my @file = <$io>;
    close($io);

    # Blank out the first line that matches the pattern.
    my $found = 0;
    foreach my $line (@file) {
        if($line =~ /$pat/o) {
            $line = '';
            $found = 1;
            last;
        }
    }

    # Rewrite the file only if a matching line was found and removed.
    if($found) {
        open($io, '>', $file) or
            die("Cannot write $file: $!\n");
        print $io @file;
        close($io);
    }
}
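A hypothetical invocation (the script name remove_line.pl is my assumption; the original answer does not name the file): save the script, make it executable, and pass the pattern followed by the files:
./remove_line.pl 'line of text' *.txt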
Note that it removes lines based on a regex, and because of the last statement it removes only the first matching line in each file. If you wanted an exact match instead, the inner foreach would look like:
foreach my $line (@file) {
    my $copy = $line;
    chomp $copy;    # strip the newline from the copy only, so the
                    # stored lines keep their newlines when written back
    if($copy eq $pat) {
        $line = '';
        $found = 1;
        last;
    }
}