Question:
Running ZooKeeper 3.3.3. I have a znode that I am just trying to list via the CLI, as in:
ls /myznode/subznode
This crashes with an IOException in org.apache.zookeeper.ClientCnxn$SendThread.readLength at line 710.
Has anyone seen this? Someone suggested that there may be bad data in the znode. I'm not sure whether that's the case, or how it could have happened, but I cannot delete the znode either, as it has something in it.
Answer 1:
I was able to work around this by increasing the maximum packet size the client accepts for the listing call.
I added a "-Djute.maxbuffer" setting to my zkCli.sh script so that it starts the client with the following line:
$JAVA "-Dzookeeper.log.dir=${ZOO_LOG_DIR}" "-Dzookeeper.root.logger=${ZOO_LOG4J_PROP}" \
"-Djute.maxbuffer=49107800" -cp "$CLASSPATH" $CLIENT_JVMFLAGS $JVMFLAGS \
org.apache.zookeeper.ZooKeeperMain "$@"
I was then able to list the node and remove it with rmr /big/node.
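For what it's worth, the same override also works for a programmatic client, since the packet-length limit is read from the jute.maxbuffer system property. A minimal sketch, assuming a local server on 2181 (the buffer value and the path are just placeholders):

import org.apache.zookeeper.ZooKeeper;

public class ListBigNode {
    public static void main(String[] args) throws Exception {
        // Must be set before the ZooKeeper client classes are loaded;
        // equivalent to passing -Djute.maxbuffer=... on the JVM command line.
        System.setProperty("jute.maxbuffer", "50111000");

        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, null);
        try {
            for (String child : zk.getChildren("/myznode/subznode", false)) {
                System.out.println(child);
            }
        } finally {
            zk.close();
        }
    }
}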
Answer 2:
So, the problem was that the znode in question had been overwhelmed with sub-znodes; it had about 5 million of them. ZooKeeper apparently does not like this. Even worse, there is no great way to clean it up. ZK should have a prune command (or something). Thanks for the answers.
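Lacking a prune command, one workaround is to delete the children one by one from a client that has the larger jute.maxbuffer set, along the lines of the following sketch (untested against millions of children; the connection string, buffer value, and path are assumptions):

import java.util.List;
import org.apache.zookeeper.ZooKeeper;

public class PruneNode {
    public static void main(String[] args) throws Exception {
        // Raise the client packet limit first, otherwise getChildren() on the
        // overgrown node fails with the "Packet len ... is out of range!" error.
        System.setProperty("jute.maxbuffer", "50111000");

        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, null);
        try {
            String parent = "/myznode/subznode";
            List<String> children = zk.getChildren(parent, false);
            for (String child : children) {
                zk.delete(parent + "/" + child, -1); // -1 = ignore the version check
            }
            zk.delete(parent, -1); // remove the now-empty parent
        } finally {
            zk.close();
        }
    }
}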
Answer 3:
Given the error line you mentioned,
707 void readLength() throws IOException {
708 int len = incomingBuffer.getInt();
709 if (len < 0 || len >= packetLen) {
710 throw new IOException("Packet len" + len + " is out of range!");
711 }
712 incomingBuffer = ByteBuffer.allocate(len);
713 }
it may be that your packet length is larger than what jute.maxbuffer
allows. The default value is 4 MB, and that should suffice, but you may have set the property to a considerably lower value.
In any case, do you have a very large number of children?
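One way to check the child count without actually listing the children (the reply stays tiny, so it works even when the listing itself would exceed the packet limit) is to read the node's Stat, which carries a numChildren field. A quick sketch, with the connection details assumed:

import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class CountChildren {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, null);
        try {
            // exists() returns the node's Stat, or null if the node is gone
            Stat stat = zk.exists("/myznode/subznode", false);
            if (stat != null) {
                System.out.println("numChildren = " + stat.getNumChildren());
            }
        } finally {
            zk.close();
        }
    }
}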
Answer 4:
I had the same "Packet len is out of range" exception, but for a much sillier reason: the port number I was specifying was for my Solr instance, not for the embedded ZooKeeper. I updated the command to
bin/solr zk upconfig -z http://localhost:9983/ -n mynewconfig -d /path/to/configset
since my Solr instance runs on 8983, and the embedded ZooKeeper port defaults to localhost:(hostPort + 1000).
Hope this helps anyone who is starting out like me.
Answer 5:
In my case, this error was due to a buggy version of SolrCloud (4.8.0); after upgrading to the latest release (4.8.1), the problem disappeared.
Answer 6:
Have you tried to access it programmatically? Something like:
import java.util.List;
import org.apache.zookeeper.ZooKeeper;

// hostPort is e.g. "localhost:2181"; the watcher may be null if you don't need notifications
ZooKeeper zooKeeper = new ZooKeeper(hostPort, 3000, myWatcher);
String path = "/myznode/subznode";
List<String> children = zooKeeper.getChildren(path, false);
for (String child : children) {
    System.out.println(child);
}
Answer 7:
I had a similar problem and was able to fix it by setting the ZooKeeper port to 2181.