Does anybody know how to scan records based on a scan filter, e.g.:
column:something = "somevalue"
Something like this, but from HBase shell?
Try this. It's kind of ugly, but it works for me.
import org.apache.hadoop.hbase.filter.CompareFilter
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter
import org.apache.hadoop.hbase.filter.SubstringComparator
import org.apache.hadoop.hbase.util.Bytes
scan 't1', { COLUMNS => 'family:qualifier',
  FILTER => SingleColumnValueFilter.new(
    Bytes.toBytes('family'),
    Bytes.toBytes('qualifier'),
    CompareFilter::CompareOp.valueOf('EQUAL'),
    SubstringComparator.new('somevalue')) }
The HBase shell will include whatever you have in ~/.irbrc, so you can put something like this in there (I'm no Ruby expert, improvements are welcome):
# imports like above
def scan_substr(table, family, qualifier, substr, *cols)
  scan table, { COLUMNS => cols,
    FILTER => SingleColumnValueFilter.new(
      Bytes.toBytes(family), Bytes.toBytes(qualifier),
      CompareFilter::CompareOp.valueOf('EQUAL'),
      SubstringComparator.new(substr)) }
end
and then you can just say in the shell:
scan_substr 't1', 'family', 'qualifier', 'somevalue', 'family:qualifier'
scan 'test', {COLUMNS => ['F'],FILTER => \
"(SingleColumnValueFilter('F','u',=,'regexstring:http:.*pdf',true,true)) AND \
(SingleColumnValueFilter('F','s',=,'binary:2',true,true))"}
More information can be found here. Note that multiple examples reside in the attached Filter Language.docx file.
Use the FILTER param of scan, as shown in the usage help:
hbase(main):002:0> scan
ERROR: wrong number of arguments (0 for 1)
Here is some help for this command:
Scan a table; pass table name and optionally a dictionary of scanner
specifications. Scanner specifications may include one or more of:
TIMERANGE, FILTER, LIMIT, STARTROW, STOPROW, TIMESTAMP, MAXLENGTH,
or COLUMNS. If no columns are specified, all columns will be scanned.
To scan all members of a column family, leave the qualifier empty as in
'col_family:'.
Some examples:
hbase> scan '.META.'
hbase> scan '.META.', {COLUMNS => 'info:regioninfo'}
hbase> scan 't1', {COLUMNS => ['c1', 'c2'], LIMIT => 10, STARTROW => 'xyz'}
hbase> scan 't1', {FILTER => org.apache.hadoop.hbase.filter.ColumnPaginationFilter.new(1, 0)}
hbase> scan 't1', {COLUMNS => 'c1', TIMERANGE => [1303668804, 1303668904]}
For experts, there is an additional option -- CACHE_BLOCKS -- which
switches block caching for the scanner on (true) or off (false). By
default it is enabled. Examples:
hbase> scan 't1', {COLUMNS => ['c1', 'c2'], CACHE_BLOCKS => false}
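For example, to filter on a specific column value as in the original question, you can pass the filter as a string (on HBase versions that support the filter language; 't1', 'family', 'qualifier' and 'somevalue' are placeholders):
hbase> scan 't1', {COLUMNS => 'family:qualifier', FILTER => "SingleColumnValueFilter('family', 'qualifier', =, 'binary:somevalue')"}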
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

Scan scan = new Scan();

// In case you have multiple SingleColumnValueFilters, decide whether a row
// must pass all conditions (MUST_PASS_ALL) or only one (MUST_PASS_ONE).
FilterList list = new FilterList(FilterList.Operator.MUST_PASS_ALL);

SingleColumnValueFilter filter_by_name = new SingleColumnValueFilter(
    Bytes.toBytes("SOME COLUMN FAMILY"),
    Bytes.toBytes("SOME COLUMN NAME"),
    CompareOp.EQUAL,
    Bytes.toBytes("SOME VALUE"));

// Skip rows that are missing the column. Adding the column filter alone does
// not exclude rows without the column; without this call they still end up
// in the result set.
filter_by_name.setFilterIfMissing(true);

list.addFilter(filter_by_name);
scan.setFilter(list);
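If you need the same multi-filter logic from the shell, MUST_PASS_ALL / MUST_PASS_ONE correspond to joining filter strings with AND / OR; a sketch with placeholder table, family and qualifier names:
hbase> scan 't1', {FILTER => "SingleColumnValueFilter('cf', 'a', =, 'binary:v1') OR SingleColumnValueFilter('cf', 'b', =, 'binary:v2')"}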
One of the filters is ValueFilter, which can be used to filter on all column values.
hbase(main):067:0> scan 'dummytable', {FILTER => "ValueFilter(=,'binary:2016-01-26')"}
binary is one of the comparators used within the filter; you can use different comparators depending on what you want to match.
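For example, a substring comparator keeps any cell whose value contains the given string (the table name and value are just illustrative):
hbase> scan 'dummytable', {FILTER => "ValueFilter(=,'substring:2016-01')"}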
You can refer to the following URL: http://www.hadooptpoint.com/filters-in-hbase-shell/. It provides good examples of how to use different filters in the HBase shell.
Call setFilterIfMissing(true) on the filter before passing it to scan, so that rows missing the column are excluded:
hbase(main):009:0> import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.filter.BinaryComparator;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.Filter;
filter = SingleColumnValueFilter.new(Bytes.toBytes('account'),
  Bytes.toBytes('ACCOUNT_NUMBER'), CompareFilter::CompareOp.valueOf('EQUAL'),
  BinaryComparator.new(Bytes.toBytes('0003000587')))
filter.setFilterIfMissing(true)
scan 'test:test8', { FILTER => filter }