How to use the HBase shell after starting HBase in cluster mode


I have three nodes: a master and two slaves (running as region servers). I started HBase and it printed:
starting master... starting slave1... starting slave2... (ZooKeeper is running in the background). Then I ran jps on each of the machines and got:


In master node:

/usr/local/hbase$ jps
19111 HMaster
19338 Jps


In slave1 node:

/usr/local/hbase$ jps
24182 HRegionServer
24277 Jps


In slave2 node:

/usr/local/hbase$ jps
10647 HRegionServer
10696 Jps
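
A note on ZooKeeper: if HBase manages it for you (HBASE_MANAGES_ZK=true in hbase-env.sh), jps on each ZooKeeper host should also list an HQuorumPeer process; with an externally managed ZooKeeper the daemon shows up as QuorumPeerMain instead. Illustrative output under the managed-ZooKeeper assumption (PIDs made up):

/usr/local/hbase$ jps
19111 HMaster
19205 HQuorumPeer
19338 Jps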

Now, my question: is everything fine, in the sense that all region servers are up?
When I start the HBase shell, I get the following output. What does it mean? Does it imply an error? I'm learning HBase, so pardon me if my questions are too trivial.

/usr/local/hbase$ hbase shell
2018-08-14 12:56:07,482 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.6.1, rUnknown, Sun Jun  3 23:19:26 CDT 2018

hbase(main):001:0> 

Am I correct up to this point? I don't want to move on and then come back to this error if I get stuck later. Can anyone tell me whether this is the right output for the HBase shell? I also didn't understand the meaning of the SLF4J messages.
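
For reference, neither of those startup lines is an error. The NativeCodeLoader line just means no native Hadoop libraries were found for your platform, so Java built-in implementations are used instead. SLF4J is the logging facade HBase uses; its warning says that two copies of the slf4j-log4j12 binding are on the classpath (one shipped with HBase, one with Hadoop) and that it picked one of them. The shell works fine either way. A minimal sketch of one common way to silence the warning, assuming the jar paths printed in the warning above:

# Rename HBase's bundled binding so only Hadoop's copy is found.
# (Keep the .bak file so the change is easy to undo.)
mv /usr/local/hbase/lib/slf4j-log4j12-1.7.5.jar \
   /usr/local/hbase/lib/slf4j-log4j12-1.7.5.jar.bak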



When I run create 'test','cf', it throws the following error:

ERROR: Can't get master address from ZooKeeper; znode data == null
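
This error means the shell reached ZooKeeper but found no master address registered under the HBase znode, which usually points to a configuration mismatch (or a master that has since died) rather than a problem with the shell itself. A sketch of how to inspect the znode with the ZooKeeper client bundled in HBase, assuming the default znode parent /hbase (check zookeeper.znode.parent and hbase.zookeeper.quorum in your hbase-site.xml if yours differ):

/usr/local/hbase$ bin/hbase zkcli
# Inside the ZooKeeper shell:
ls /hbase          # should list children such as master and rs
get /hbase/master  # should print the registered master's address

If /hbase/master is missing, check the master's log under /usr/local/hbase/logs for the reason it did not register.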

1 Answer


The HBase master manages the whole cluster, so you can check the cluster's state in the master's web UI at http://master:16010/master-status (replace master with your master's hostname).
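
Besides the web UI, you can confirm from the shell itself that both region servers have checked in; a minimal sketch using the built-in status command:

hbase(main):001:0> status
# Expect a one-line summary roughly like:
# 1 active master, 0 backup masters, 2 servers, 0 dead, 1.0000 average load
hbase(main):002:0> status 'detailed'
# Prints a per-region-server breakdown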