We have an HBase table where the rowkey is formed by concatenating Site + Article, i.e. if site A sells article nos. 100, 200 and 300, the rowkeys are A100, A200 and A300 respectively. Now we want to scan the table by article number only, and an article can be present in multiple sites. We tried a scan with a substring comparator, but it takes a long time. Can anyone suggest a better salting or rowkey design for this scenario?
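A minimal sketch of the kind of substring-comparator scan described above, using the HBase Java client (the table name and the article number are illustrative). Because the filter cannot use the leading part of the rowkey, HBase still has to read every row:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.CompareFilter;
import org.apache.hadoop.hbase.filter.RowFilter;
import org.apache.hadoop.hbase.filter.SubstringComparator;
import org.apache.hadoop.hbase.util.Bytes;

public class ScanByArticleSubstring {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("SiteToArticle"))) { // table name illustrative
            // Keep only rows whose key contains "100"; the filter is evaluated against
            // every row, so this is effectively a full-table scan.
            Scan scan = new Scan();
            scan.setFilter(new RowFilter(CompareFilter.CompareOp.EQUAL, new SubstringComparator("100")));
            try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result r : scanner) {
                    System.out.println(Bytes.toString(r.getRow()));
                }
            }
        }
    }
}
```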
There is 1 answer below.
It doesn't seem like this problem can be solved by a simple rowkey redesign, unless you are able to swap SiteId and ArticleId, but in that case you will have the same problem when searching by SiteId. The reason for this behaviour is that HBase cannot optimize a search on the middle or last part of a key in any way, so it has to do a full scan.
Some solutions you might consider:

1. Do several concurrent gets, one per site, with condition rowkey == SiteId + ArticleId. This works fast if you have a relatively small number of sites (see the sketch after this list).
2. Build a custom secondary index: a second index table with ArticleId as the rowkey and the SiteIds as cell values.
3. Use Apache Phoenix, which can do secondary indexing out of the box (but check that it fits your needs first).

In the second case you perform one get by key on the index table, and then zero or more gets on the main table, one per cell returned by the first get. This works pretty fast, but requires some space overhead.
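For option 1, a minimal sketch assuming the list of site ids is small and known to the application (the site ids and the SiteToArticle table name are illustrative):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class GetArticleAcrossSites {
    public static void main(String[] args) throws Exception {
        String articleId = "100";
        List<String> siteIds = Arrays.asList("A", "B", "C"); // assumed to be known up front

        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("SiteToArticle"))) {
            // One Get per site for the rowkey SiteId + ArticleId; these are direct key lookups, no scan.
            List<Get> gets = new ArrayList<>();
            for (String siteId : siteIds) {
                gets.add(new Get(Bytes.toBytes(siteId + articleId)));
            }
            for (Result r : table.get(gets)) {
                if (r != null && !r.isEmpty()) {
                    System.out.println(Bytes.toString(r.getRow()));
                }
            }
        }
    }
}
```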
The second option in more detail:

Suppose your main table is called SiteToArticle and the second table is called ArticleToSite. When you write, you write to both tables: to the first as you usually do, and to the second a row like {"rowkey"=ArticleId, "SiteId"=siteId}. When you read, you first read from ArticleToSite, then iterate over each SiteId, create a new Get with key SiteId + ArticleId, and perform the second batch of gets. Code may look approximately like this:
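A minimal sketch using the standard HBase Java client. The table names SiteToArticle and ArticleToSite come from the description above; the column families "s" (one qualifier per SiteId) and "d", the empty index cell values, and the method names are assumptions made for illustration:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class ArticleIndexExample {

    private static final byte[] INDEX_FAMILY = Bytes.toBytes("s"); // assumed family on ArticleToSite
    private static final byte[] DATA_FAMILY  = Bytes.toBytes("d"); // assumed family on SiteToArticle

    // Write path: write the main row and keep the index table in sync.
    public static void writeArticle(Connection conn, String siteId, String articleId,
                                    byte[] qualifier, byte[] value) throws IOException {
        try (Table data = conn.getTable(TableName.valueOf("SiteToArticle"));
             Table index = conn.getTable(TableName.valueOf("ArticleToSite"))) {
            // Main row keyed by SiteId + ArticleId, e.g. "A100".
            Put dataPut = new Put(Bytes.toBytes(siteId + articleId));
            dataPut.addColumn(DATA_FAMILY, qualifier, value);
            data.put(dataPut);

            // Index row keyed by ArticleId; one qualifier per SiteId, the value is unused.
            Put indexPut = new Put(Bytes.toBytes(articleId));
            indexPut.addColumn(INDEX_FAMILY, Bytes.toBytes(siteId), Bytes.toBytes(""));
            index.put(indexPut);
        }
    }

    // Read path: one get on the index table, then a batch of gets on the main table.
    public static List<Result> readByArticle(Connection conn, String articleId) throws IOException {
        try (Table data = conn.getTable(TableName.valueOf("SiteToArticle"));
             Table index = conn.getTable(TableName.valueOf("ArticleToSite"))) {
            Result indexRow = index.get(new Get(Bytes.toBytes(articleId)));

            // Each qualifier in the index family is a SiteId that sells this article.
            List<Get> gets = new ArrayList<>();
            if (!indexRow.isEmpty()) {
                for (byte[] siteId : indexRow.getFamilyMap(INDEX_FAMILY).keySet()) {
                    gets.add(new Get(Bytes.toBytes(Bytes.toString(siteId) + articleId)));
                }
            }

            List<Result> rows = new ArrayList<>();
            if (!gets.isEmpty()) {
                for (Result r : data.get(gets)) {
                    if (r != null && !r.isEmpty()) {
                        rows.add(r);
                    }
                }
            }
            return rows;
        }
    }

    public static void main(String[] args) throws IOException {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())) {
            writeArticle(conn, "A", "100", Bytes.toBytes("qty"), Bytes.toBytes("5"));
            for (Result r : readByArticle(conn, "100")) {
                System.out.println(Bytes.toString(r.getRow()));
            }
        }
    }
}
```

Note that the two puts in the write path are not atomic across tables, so the application has to tolerate or reconcile a briefly stale index; that maintenance cost is part of the overhead mentioned above.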