Discussion:
"could only be replicated to 0 nodes, instead of 1"
jerrro
2007-12-05 16:59:42 UTC
I am trying to install/configure Hadoop on a cluster with several computers.
I followed exactly the instructions on the Hadoop website for configuring
multiple slaves, and when I run start-all.sh I get no errors - both the datanode
and the tasktracker are reported to be running (doing ps awux | grep hadoop on
the slave nodes returns two java processes). Also, the log files are empty -
nothing is printed there. Still, when I try to use bin/hadoop dfs -put,
I get the following error:

# bin/hadoop dfs -put w.txt w.txt
put: java.io.IOException: File /user/scohen/w4.txt could only be replicated
to 0 nodes, instead of 1

and a file of size 0 is created on the DFS (bin/hadoop dfs -ls shows it).

I couldn't find much information about this error, but I did manage to see
somewhere that it might mean there are no datanodes running. But as I said,
start-all does not give any errors. Any ideas what the problem could be?

Thanks.

Jerr.
Jason Venner
2007-12-05 17:09:04 UTC
This happens to me when the DFS has gotten into an inconsistent state.

NOTE: you will lose all of the contents of your HDFS file system.

What I have to do is stop DFS, remove the contents of the dfs
directories on all the machines, run hadoop namenode -format on the
controller, then restart DFS.
That consistently fixes the problem for me. This may be serious overkill,
but it works.

NOTE: you will lose all of the contents of your HDFS file system.
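
A minimal sketch of that reset sequence, assuming the default
dfs.name.dir/dfs.data.dir locations under /tmp/hadoop-<user> (adjust the
paths to whatever your conf/hadoop-site.xml actually specifies):

# on the namenode: stop HDFS
bin/stop-dfs.sh
# on every machine: clear the DFS storage directories (destroys all HDFS data!)
rm -rf /tmp/hadoop-$USER/dfs
# on the namenode: reformat, then restart HDFS
bin/hadoop namenode -format
bin/start-dfs.sh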
jerrro
2007-12-05 17:28:57 UTC
I did this several times, while tuning the configuration in all kinds of
ways... but still, nothing helped -
even when I stop everything, reformat and start it back up again, I get this
error whenever I try to use dfs -put.
Hairong Kuang
2007-12-05 17:38:33 UTC
Check http://namenode_host:50070/dfshealth.jsp to see whether your cluster is
out of safe mode and how many datanodes are up.

You could also check the .out/.log files under the log directory to see if
there were any errors starting the datanodes/namenode.
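
If a browser is not handy, a rough command-line version of the same check
(assuming the status page of this era's web UI mentions safe mode and live
nodes; replace namenode_host with your namenode):

# fetch the namenode status page and look for the relevant lines
curl -s http://namenode_host:50070/dfshealth.jsp | grep -i -e 'safe mode' -e 'live'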

Hairong

Jayant Durgad
2008-04-11 00:58:40 UTC
I am faced with the exact same problem described here. Does anybody know how
to resolve it?
John Menzer
2008-04-12 21:04:00 UTC
I had the same error message...
Can you describe when and how the error occurs?
Raghu Angadi
2008-04-11 23:22:09 UTC
Post by jerrro
I couldn't find much information about this error, but I did manage to see
somewhere that it might mean there are no datanodes running. But as I said,
start-all does not give any errors. Any ideas what the problem could be?
start-all.sh returning cleanly does not mean the datanodes are OK. Did you
check whether any datanodes are alive? You can check from http://namenode:50070/.
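
As a command-line alternative to the web UI, the dfsadmin report lists the
datanodes the namenode currently knows about (run from the Hadoop install
directory):

# prints overall capacity plus one section per live datanode
bin/hadoop dfsadmin -report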

Raghu.
lohit
2008-04-12 21:14:07 UTC
Can you check the datanode and namenode logs and see if everything is up and
running? I am assuming you are running this on a single host, hence the
replication factor of 1.
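
A quick way to scan those logs for startup failures (assuming the default log
directory under the install; the exact file names include your user and host
names):

# on each node, from the Hadoop install directory
grep -i -e exception -e error logs/hadoop-*-namenode-*.log logs/hadoop-*-datanode-*.log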
Thanks,
Lohit

jasongs
2008-05-08 10:30:03 UTC
I get the same error when doing a put, and my cluster is running OK,
i.e. it has capacity and all nodes are live.
The error message is:
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /test/test.txt could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.dfs.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1127)
    at org.apache.hadoop.dfs.NameNode.addBlock(NameNode.java:312)
    at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:409)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:901)

    at org.apache.hadoop.ipc.Client.call(Client.java:512)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:198)
    at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:585)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2074)
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:1967)
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1500(DFSClient.java:1487)
    at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1601)
I would appreciate any help or suggestions.

Thanks
Hairong Kuang
2008-05-08 18:04:28 UTC
Could you please go to the DFS web UI and check how many datanodes are up and
how much available space each one has?
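
The same numbers also appear in the dfsadmin report, for anyone without
browser access to the cluster (a sketch; the exact field names vary a bit
across Hadoop versions):

# summarize datanode count and free space from the report
bin/hadoop dfsadmin -report | grep -i -e 'datanodes available' -e remaining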

Hairong
Arul Ganesh
2008-11-13 20:28:33 UTC
Hi,
If you are getting this in a Windows environment (2003, 64-bit): we have faced
the same problem. We tried the following steps and it started working.
1) Install cygwin and ssh.
2) Download the stable Hadoop release - hadoop-0.17.2.1.tar.gz as of
13/Nov/2008.
3) Untar it via cygwin (tar xvfz hadoop-0.17.2.1.tar.gz). Please DO NOT use
WinZip to untar it.
4) Run the pseudo-distributed example from the quickstart
(http://hadoop.apache.org/core/docs/current/quickstart.html) - it worked for
us; see the sketch below.
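
For reference, the pseudo-distributed run in that quickstart boils down to
roughly the following (a sketch from the 0.17-era docs; the examples jar name
must match your release):

# format a fresh filesystem and start the daemons
bin/hadoop namenode -format
bin/start-all.sh
# copy the input in, run the grep example, and print the output
bin/hadoop dfs -put conf input
bin/hadoop jar hadoop-0.17.2.1-examples.jar grep input output 'dfs[a-z.]+'
bin/hadoop dfs -cat output/*
bin/stop-all.sh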

Thanks
Arul and Limin
eBay Inc.