Discussion:
More Replication on dfs
Puri, Aseem
2009-04-10 04:56:32 UTC
Hi

I am a new Hadoop user. I have a small cluster with 3
datanodes. In hadoop-site.xml, the value of the dfs.replication property
is 2, but data is still being replicated to 3 machines.

Please tell me why this is happening?
Regards,

Aseem Puri
Alex Loddengaard
2009-04-10 05:20:25 UTC
Did you load any files when replication was set to 3? If so, you'll have to
rebalance:

<http://hadoop.apache.org/core/docs/r0.19.1/commands_manual.html#balancer>
<http://hadoop.apache.org/core/docs/r0.19.1/hdfs_user_guide.html#Rebalancer>

Note that most people run HDFS with a replication factor of 3. There have
been cases where clusters running with a replication factor of 2 uncovered
new bugs, precisely because replication is so often set to 3. That said, if
you can, it's probably advisable to run with a replication factor of 3
rather than 2.

Alex
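The trade-off Alex describes can be sketched numerically. Assuming independent node failures with probability p (both the independence and the value of p below are illustrative assumptions, not measurements), the chance of losing every replica of a block is p to the power of the replication factor:

```python
# Back-of-the-envelope sketch: probability that ALL replicas of a block
# are lost, assuming each node fails independently with probability p
# during some window. Illustrative only; a live HDFS cluster
# re-replicates lost blocks in the background, which helps further.

def block_loss_probability(p: float, replication: int) -> float:
    """P(every one of `replication` replicas sits on a failed node)."""
    return p ** replication

if __name__ == "__main__":
    p = 0.01  # assumed per-node failure probability (not a measured value)
    for r in (1, 2, 3):
        print(f"replication={r}: P(block lost) = {block_loss_probability(p, r):.1e}")
```

Each extra replica multiplies the loss probability by p, so going from 2 to 3 replicas buys roughly a factor-of-1/p reduction in the chance of losing a block outright.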
Mithila Nagendra
2009-04-10 05:26:05 UTC
To add to the question: how does one decide the optimal replication factor
for a cluster? For instance, what would be an appropriate replication
factor for a cluster consisting of 5 nodes?
Mithila
Puri, Aseem
2009-04-10 05:47:30 UTC
Hi Alex,

Thanks for sharing your knowledge. So far I have three
machines, and I want to check the behavior of Hadoop, so I want the
replication factor to be 2. I started my Hadoop cluster with a
replication factor of 3. After that I uploaded 3 files to run a word
count program. But since all my files are stored on one machine and
replicated to the other datanodes, my MapReduce program takes input
from only one datanode. I want my files on different datanodes so that
I can check the MapReduce functionality properly.

Also, before starting my Hadoop cluster again with a replication
factor of 2, I formatted all datanodes and deleted all the old data manually.

Please suggest what I should do now.

Regards,
Aseem Puri


Puri, Aseem
2009-04-10 06:23:23 UTC
Hi
I also tried the command $ bin/hadoop balancer, but I still see the
same problem.

Aseem

Alex Loddengaard
2009-04-10 17:34:11 UTC
Aseem,

How are you verifying that blocks are not being replicated? Have you run
fsck? *bin/hadoop fsck /*

I'd be surprised if replication really weren't happening. Can you run fsck
and pay attention to "Under-replicated blocks" and "Mis-replicated blocks"?
In fact, can you just copy-paste the output of fsck?

Alex
Aaron Kimball
2009-04-10 18:16:53 UTC
Changing the default replication in hadoop-site.xml does not affect files
already loaded into HDFS. File replication factor is controlled on a
per-file basis.

You need to use the command `hadoop fs -setrep n path...` to set the
replication factor to "n" for a particular path already present in HDFS. It
can also take a -R for recursive.

- Aaron
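Aaron's `-setrep` invocation, applied to a path from this thread, would look roughly as follows. The helper function is purely illustrative (only the `hadoop fs -setrep [-R] n path...` command itself comes from the post above):

```python
# Illustrative helper (not part of Hadoop): assemble the argv for
# `hadoop fs -setrep [-R] n path...` so that files already present
# in HDFS get a new replication factor.

def setrep_command(replication: int, paths, recursive: bool = False):
    """Return the argv list for `hadoop fs -setrep [-R] n path...`."""
    cmd = ["hadoop", "fs", "-setrep"]
    if recursive:
        cmd.append("-R")   # apply recursively to everything under each path
    cmd.append(str(replication))
    cmd.extend(paths)
    return cmd

if __name__ == "__main__":
    # e.g. drop the word-count input files from this thread to replication 2
    print(" ".join(setrep_command(2, ["/user/HadoopAdmin/input"], recursive=True)))
```

Note that `-setrep` only changes the target replication; the NameNode schedules the actual addition or deletion of replicas in the background.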
Puri, Aseem
2009-04-11 14:00:55 UTC
Alex,

Output of the $ bin/hadoop fsck / command, after running an HBase data
insert command on a table, is:

.....
.....
.....
.....
.....
/hbase/test/903188508/tags/info/4897652949308499876: Under replicated blk_-5193695109439554521_3133. Target Replicas is 3 but found 1 replica(s).
.
/hbase/test/903188508/tags/mapfiles/4897652949308499876/data: Under replicated blk_-1213602857020415242_3132. Target Replicas is 3 but found 1 replica(s).
.
/hbase/test/903188508/tags/mapfiles/4897652949308499876/index: Under replicated blk_3934493034551838567_3132. Target Replicas is 3 but found 1 replica(s).
.
/user/HadoopAdmin/hbase table.doc: Under replicated blk_4339521803948458144_1031. Target Replicas is 3 but found 2 replica(s).
.
/user/HadoopAdmin/input/bin.doc: Under replicated blk_-3661765932004150973_1030. Target Replicas is 3 but found 2 replica(s).
.
/user/HadoopAdmin/input/file01.txt: Under replicated blk_2744169131466786624_1001. Target Replicas is 3 but found 2 replica(s).
.
/user/HadoopAdmin/input/file02.txt: Under replicated blk_2021956984317789924_1002. Target Replicas is 3 but found 2 replica(s).
.
/user/HadoopAdmin/input/test.txt: Under replicated blk_-3062256167060082648_1004. Target Replicas is 3 but found 2 replica(s).
...
/user/HadoopAdmin/output/part-00000: Under replicated blk_8908973033976428484_1010. Target Replicas is 3 but found 2 replica(s).
Status: HEALTHY
 Total size:                    48510226 B
 Total dirs:                    492
 Total files:                   439 (Files currently being written: 2)
 Total blocks (validated):      401 (avg. block size 120973 B) (Total open file blocks (not validated): 2)
 Minimally replicated blocks:   401 (100.0 %)
 Over-replicated blocks:        0 (0.0 %)
 Under-replicated blocks:       399 (99.50124 %)
 Mis-replicated blocks:         0 (0.0 %)
 Default replication factor:    2
 Average block replication:     1.3117207
 Corrupt blocks:                0
 Missing replicas:              675 (128.327 %)
 Number of data-nodes:          2
 Number of racks:               1


The filesystem under path '/' is HEALTHY
Please tell me what is wrong.

Aseem
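Counts like the ones in that fsck report can be tallied mechanically. Here is a sketch; the regular expression is inferred from the fsck lines quoted in this thread (0.19-era output, assuming each record fits on one line) and is not an official Hadoop format guarantee:

```python
import re

# Sketch: tally "Under replicated" records from `hadoop fsck /` output.
# The pattern below is reverse-engineered from the fsck lines quoted in
# this thread; it assumes one record per line.
UNDER_RE = re.compile(
    r"^(?P<path>\S.*?):\s+Under replicated\s+(?P<blk>blk_-?\d+_\d+)\."
    r"\s+Target Replicas is (?P<target>\d+) but found (?P<found>\d+) replica"
)

def tally_under_replicated(fsck_output: str):
    """Return (path, block_id, target, found) for each under-replicated record."""
    hits = []
    for line in fsck_output.splitlines():
        m = UNDER_RE.match(line.strip())
        if m:
            hits.append((m.group("path"), m.group("blk"),
                         int(m.group("target")), int(m.group("found"))))
    return hits

if __name__ == "__main__":
    sample = ("/user/HadoopAdmin/input/file01.txt: Under replicated "
              "blk_2744169131466786624_1001. Target Replicas is 3 "
              "but found 2 replica(s).")
    print(tally_under_replicated(sample))
```

Grouping the tuples by `target` versus `found` makes it easy to see, as in this report, which files want 3 replicas but only have 1 or 2.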

Alex Loddengaard
2009-04-14 20:59:37 UTC
Ah, I didn't realize you were using HBase. It could definitely be the case
that HBase is explicitly setting the replication of certain files to 1.
Unfortunately I don't know enough about HBase to say whether or why certain
files are given only a single replica. This might be a good question for the
HBase user list, or perhaps grepping the HBase source might tell you
something interesting. Can you confirm that the under-replicated files are
created by HBase?

Alex
Raghu Angadi
2009-04-14 21:27:58 UTC
Aseem,

Regarding over-replication: it is most likely an application-level issue,
as Alex mentioned.

But if you are concerned about the under-replicated blocks in the fsck
output: these blocks should not stay under-replicated if you have enough
nodes and enough space on them (check the NameNode web UI).

Try grepping for one of the blocks in the NameNode log (and the datanode
logs as well, since you have just 3 nodes).

Raghu.
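Raghu's grep suggestion amounts to searching each daemon's log for a block ID. A minimal sketch; the log file name is a placeholder, and only the block IDs come from this thread:

```python
# Sketch of Raghu's suggestion: find every log line mentioning a block.
# The log path used under __main__ is a placeholder; point it at your
# actual NameNode/datanode log files.

def grep_block(log_path: str, block_id: str):
    """Yield (line_number, line) for lines in `log_path` mentioning `block_id`."""
    with open(log_path, errors="replace") as f:
        for n, line in enumerate(f, start=1):
            if block_id in line:
                yield n, line.rstrip("\n")

if __name__ == "__main__":
    import os
    log = "hadoop-namenode.log"  # placeholder path
    if os.path.exists(log):
        for n, line in grep_block(log, "blk_-1213602857020415242"):
            print(f"{n}: {line}")
```

Running this against the NameNode log and each datanode log should show where replication of the block was scheduled, and whether any datanode reported an error while receiving it.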
Puri, Aseem
2009-04-16 05:03:05 UTC
Hi
My problem is not that my data is under-replicated. I have 3
datanodes, and in my hadoop-site.xml I have also set the configuration:

<property>
<name>dfs.replication</name>
<value>2</value>
</property>

But even after this, data is replicated on 3 nodes instead of two.

Now, please tell me what the problem could be?

Thanks & Regards
Aseem Puri



-----Original Message-----
From: Raghu Angadi [mailto:rangadi-ZXvpkYn067l8UrSeD/***@public.gmane.org]
Sent: Wednesday, April 15, 2009 2:58 AM
To: core-user-7ArZoLwFLBtd/SJB6HiN2Ni2O/***@public.gmane.org
Subject: Re: More Replication on dfs

Aseem,

Regd over-replication, it is mostly app related issue as Alex mentioned.

But if you are concerned about under-replicated blocks in fsck output :

These blocks should not stay under-replicated if you have enough nodes
and enough space on them (check NameNode webui).

Try grep-ing for one of the blocks in NameNode log (and datnode logs as
well, since you have just 3 nodes).

Raghu.
Post by Puri, Aseem
Alex,
Ouput of $ bin/hadoop fsck / command after running HBase data insert
.....
.....
.....
.....
.....
/hbase/test/903188508/tags/info/4897652949308499876: Under replicated
blk_-5193
695109439554521_3133. Target Replicas is 3 but found 1 replica(s).
.
/hbase/test/903188508/tags/mapfiles/4897652949308499876/data: Under
replicated
blk_-1213602857020415242_3132. Target Replicas is 3 but found 1
replica(s).
.
/hbase/test/903188508/tags/mapfiles/4897652949308499876/index: Under
replicated
blk_3934493034551838567_3132. Target Replicas is 3 but found 1
replica(s).
.
/user/HadoopAdmin/hbase table.doc: Under replicated
blk_4339521803948458144_103
1. Target Replicas is 3 but found 2 replica(s).
.
/user/HadoopAdmin/input/bin.doc: Under replicated
blk_-3661765932004150973_1030
. Target Replicas is 3 but found 2 replica(s).
.
/user/HadoopAdmin/input/file01.txt: Under replicated
blk_2744169131466786624_10
01. Target Replicas is 3 but found 2 replica(s).
.
/user/HadoopAdmin/input/file02.txt: Under replicated
blk_2021956984317789924_10
02. Target Replicas is 3 but found 2 replica(s).
.
/user/HadoopAdmin/input/test.txt: Under replicated
blk_-3062256167060082648_100
4. Target Replicas is 3 but found 2 replica(s).
...
/user/HadoopAdmin/output/part-00000: Under replicated
blk_8908973033976428484_1
010. Target Replicas is 3 but found 2 replica(s).
Status: HEALTHY
Total size: 48510226 B
Total dirs: 492
Total files: 439 (Files currently being written: 2)
Total blocks (validated): 401 (avg. block size 120973 B) (Total
open file
blocks (not validated): 2)
Minimally replicated blocks: 401 (100.0 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 399 (99.50124 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 2
Average block replication: 1.3117207
Corrupt blocks: 0
Missing replicas: 675 (128.327 %)
Number of data-nodes: 2
Number of racks: 1
The filesystem under path '/' is HEALTHY
Please tell what is wrong.
Aseem
-----Original Message-----
Sent: Friday, April 10, 2009 11:04 PM
Subject: Re: More Replication on dfs
Aseem,
How are you verifying that blocks are not being replicated? Have you ran
fsck? *bin/hadoop fsck /*
I'd be surprised if replication really wasn't happening. Can you run fsck
and pay attention to "Under-replicated blocks" and "Mis-replicated blocks?"
In fact, can you just copy-paste the output of fsck?
Alex
On Thu, Apr 9, 2009 at 11:23 PM, Puri, Aseem
Post by Puri, Aseem
Hi
I also tried the command $ bin/hadoop balancer. But still the
same problem.
Aseem
-----Original Message-----
Sent: Friday, April 10, 2009 11:18 AM
Subject: RE: More Replication on dfs
Hi Alex,
Thanks for sharing your knowledge. So far I have three
machines, and I want to check the behavior of Hadoop with a
replication factor of 2. I started my Hadoop server with
replication factor 3. After that I uploaded 3 files to implement a word
count program. But since all my files are stored on one machine and
replicated to the other datanodes as well, my map reduce program takes
input from one Datanode only. I want my files to be on different data nodes
to check the functionality of map reduce properly.
Also, before starting my Hadoop server again with replication
factor 2, I formatted all Datanodes and deleted all old data manually.
Please suggest what I should do now.
Regards,
Aseem Puri
-----Original Message-----
Sent: Friday, April 10, 2009 10:56 AM
Subject: Re: More Replication on dfs
To add to the question, how does one decide the optimal replication
factor for a cluster? For instance, what would be the appropriate replication
factor for a cluster consisting of 5 nodes?
Mithila
Puri, Aseem
2009-04-16 05:04:11 UTC
Permalink
Hi
My problem is not that my data is under replicated. I have 3
data nodes. In my hadoop-site.xml I also set the configuration as:

<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>

But after this also the data is replicated on 3 nodes instead of two nodes.

Now, please tell what can be the problem?

Thanks & Regards
Aseem Puri

-----Original Message-----
From: Raghu Angadi [mailto:rangadi-ZXvpkYn067l8UrSeD/***@public.gmane.org]
Sent: Wednesday, April 15, 2009 2:58 AM
To: core-user-7ArZoLwFLBtd/SJB6HiN2Ni2O/***@public.gmane.org
Subject: Re: More Replication on dfs

Aseem,

Regd over-replication, it is mostly an app-related issue, as Alex mentioned.

But if you are concerned about under-replicated blocks in the fsck output:
these blocks should not stay under-replicated if you have enough nodes
and enough space on them (check the NameNode web UI).

Try grep-ing for one of the blocks in the NameNode log (and datanode logs as
well, since you have just 3 nodes).

Raghu.
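One detail worth noting from the fsck summary quoted earlier: it reports `Number of data-nodes: 2` while the target replication is 3, and a block can never have more replicas than there are live datanodes. A tiny sketch of that ceiling (the helper function is hypothetical, not part of Hadoop):

```shell
# A block's achievable replica count is capped by the number of live
# datanodes. Hypothetical helper illustrating the ceiling:
effective_replicas() {
  target=$1; live_datanodes=$2
  if [ "$target" -le "$live_datanodes" ]; then
    echo "$target"
  else
    echo "$live_datanodes"   # blocks stay under-replicated at this count
  fi
}
effective_replicas 3 2   # prints 2, matching "found 2 replica(s)" in fsck
```

So with only 2 datanodes up, files written under a target of 3 will sit at 2 replicas until a third node joins, exactly the "Under replicated ... found 2 replica(s)" pattern in the report.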
Aaron Kimball
2009-04-17 04:26:38 UTC
Permalink
That setting will instruct future file writes to replicate two-fold. It
has no bearing on existing files; replication can be set on a per-file
basis, so existing files already have their replication factors set in the DFS individually.

Use the command: bin/hadoop fs -setrep [-R] repl_factor filename...

to change the replication factor for files already in HDFS.
- Aaron
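Since -setrep needs a live cluster, here is the shape of the invocation as a dry-run sketch; the helper that merely prints the command is hypothetical, and the path is one of the example files from this thread:

```shell
# Dry-run sketch of the -setrep invocation Aaron describes: print the
# command rather than execute it (no cluster is assumed here).
# The setrep_cmd helper is hypothetical, for illustration only.
setrep_cmd() {
  replication=$1; path=$2
  # -R recurses into directories; -w waits until the target is reached.
  echo "bin/hadoop fs -setrep -R -w ${replication} ${path}"
}
# For files written while dfs.replication was still 3:
setrep_cmd 2 /user/HadoopAdmin/input
# Run the printed line against a real cluster to apply the change.
```

Afterwards, `bin/hadoop fs -ls` can confirm the result: the second column of its output is each file's replication factor.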
Alex Loddengaard
2009-04-10 17:31:43 UTC
Permalink
Mithila,

Most people run with a replication factor of 3. Three replicas get you one
local copy, one copy on a different rack, and an additional copy on that
same remote rack.

Quantcast gave a talk at a user group a while ago about physically moving a
data center from one colo to another. Turning off machines and moving them
increases the probability of those machines going down, so Quantcast upped
their replication to 7, I believe. Once the move was done, they lowered
their replication back to whatever it was set to previously, which I think
was 3.

So anyway, a replication factor of 3 is totally sufficient, unless you've
come across a particular case where node failure is higher than "normal,"
for example if you're running super unreliable hardware.

Alex
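A quick way to frame the choice Mithila asks about: with replication factor r, a given block survives up to r - 1 simultaneous node losses, so the factor is driven by failure tolerance (and the kind of event Alex describes), not by cluster size. A sketch of that arithmetic:

```shell
# With replication factor r, a block tolerates r-1 simultaneous node
# failures before data loss. Cluster size mainly bounds r from above,
# since r cannot exceed the number of datanodes.
tolerated_failures() {
  echo $(( $1 - 1 ))
}
tolerated_failures 3   # prints 2: the common default survives two lost nodes
tolerated_failures 2   # prints 1: why replication 2 is riskier
```

By this measure a 5-node cluster gains nothing from raising r above 3 for durability in ordinary operation, which matches the "3 is totally sufficient" advice above.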