Is there any grep option that lets me control the total number of matches but stops at the first match in each file?
Example:
If I run
grep -ri --include '*.coffee' 're' .
I get this:
./app.coffee:express = require 'express'
./app.coffee:passport = require 'passport'
./app.coffee:BrowserIDStrategy = require('passport-browserid').Strategy
./app.coffee:app = express()
./config.coffee: session_secret: 'nyan cat'
And if I run
grep -ri -m2 --include '*.coffee' 're' .
I get this:
./app.coffee:config = require './config'
./app.coffee:passport = require 'passport'
But what I really want is this output:
./app.coffee:express = require 'express'
./config.coffee: session_secret: 'nyan cat'
Using -m1 does not work either; for
grep -ri -m1 --include '*.coffee' 're' .
I get only this:
./app.coffee:express = require 'express'
I also tried not using grep, e.g.
find . -name '*.coffee' -exec awk '/re/ {print;exit}' {} \;
which produced:
config = require './config'
session_secret: 'nyan cat'
UPDATE: As noted below, GNU grep treats the -m option as a per-file match count, whereas BSD grep treats it as a global match count.
I think you can just do something like
grep -ri -m1 --include '*.coffee' 're' . | head -n 2
to pick the first match from each file and at most two matches in total.
Note that this requires your grep to treat -m as a per-file match limit; GNU grep does this, but BSD grep apparently treats it as a global match limit.
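With the asker's example tree, the output should match what was requested (the exact order depends on directory traversal):
./app.coffee:express = require 'express'
./config.coffee: session_secret: 'nyan cat'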
So, using grep alone, you just need the option -l, --files-with-matches.
All those answers about find, awk or shell scripts are beside the point of the question.
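For example, with the files from the question, something like
grep -ril --include '*.coffee' 're' .
should print just the names of the matching files, not the matching lines:
./app.coffee
./config.coffee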
I would do this in awk instead.
find . -name \*.coffee -exec awk '/re/ {print FILENAME ":" $0;exit}' {} \;
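Since the awk program prints FILENAME ":" $0, the output format matches grep's; with the question's example files it should look something like:
./app.coffee:express = require 'express'
./config.coffee: session_secret: 'nyan cat'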
If you didn't need to recurse, you could just do it with awk:
awk '/re/ {print FILENAME ":" $0;nextfile}' *.coffee
Or, if you're using a recent enough bash (4.0 or later), you can use globstar:
shopt -s globstar
awk '/re/ {print FILENAME ":" $0;nextfile}' **/*.coffee
find . -name \*.coffee -exec grep -m1 -i 're' {} \;
find's -exec option runs the command once for each matched file (unless you use + instead of \;, which makes it act like xargs).
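One caveat: because grep is run on a single file at a time here, it won't prefix each line with the file name. If your grep supports -H (GNU and BSD grep both do), you can force the filename:line output asked for in the question:
find . -name \*.coffee -exec grep -H -m1 -i 're' {} \;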
Using find and xargs: find every .coffee file and execute grep -m1 on each of them.
find . -name '*.coffee' -print0 | xargs -0 grep -m1 -ri 're'
(Note: -print0 must come after the -name test, since find evaluates its expression left to right; otherwise every path gets printed before the name filter is applied.)
Test without -m1:
linux# find . -name '*.txt'|xargs grep -ri 'oyss'
./test1.txt:oyss
./test1.txt:oyss1
./test1.txt:oyss2
./test2.txt:oyss1
./test2.txt:oyss2
./test2.txt:oyss3
Add -m1:
linux# find . -name '*.txt'|xargs grep -m1 -ri 'oyss'
./test1.txt:oyss
./test2.txt:oyss1
You can do this easily in Perl, with no messy cross-platform issues!
use strict;
use warnings;
use autodie;
my $match = shift;
# Compile the match so it will run faster
my $match_re = qr{$match};
FILES: for my $file (@ARGV) {
    open my $fh, "<", $file;
    FILE: while(my $line = <$fh>) {
        chomp $line;
        if( $line =~ $match_re ) {
            print "$file: $line\n";
            last FILE;
        }
    }
}
The only difference is that you have to use Perl-style regular expressions instead of GNU-style ones. They're not much different.
You can do the recursive part in Perl using File::Find, or use find to feed it files (the pattern goes first, since the script shifts it off @ARGV):
find /some/path -name '*.coffee' -print0 | xargs -0 perl /path/to/your/program 're'
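If you don't need recursion, the script can also be run directly on a glob; assuming it were saved as first-match.pl (a name chosen here just for illustration):
perl first-match.pl 're' *.coffee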