Discussion:
Hadoop C++ HDFS test running Exception
Andrea Barbato
2014-01-13 10:06:50 UTC
Permalink
I'm working with Hadoop 2.2.0 and trying to run this *hdfs_test.cpp*
application:

#include "hdfs.h"
int main(int argc, char **argv) {

hdfsFS fs = hdfsConnect("default", 0);
const char* writePath = "/tmp/testfile.txt";
hdfsFile writeFile = hdfsOpenFile(fs, writePath, O_WRONLY|O_CREAT, 0, 0, 0);
if(!writeFile) {
fprintf(stderr, "Failed to open %s for writing!\n", writePath);
exit(-1);
}
char* buffer = "Hello, World!";
tSize num_written_bytes = hdfsWrite(fs, writeFile, (void*)buffer,
strlen(buffer)+1);
if (hdfsFlush(fs, writeFile)) {
fprintf(stderr, "Failed to 'flush' %s\n", writePath);
exit(-1);
}
hdfsCloseFile(fs, writeFile);}
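
For reference, a typical compile/link line for a libhdfs program against a Hadoop 2.2.0 tarball install looks roughly like the following (a sketch only; the include, library, and JVM paths are assumptions for a standard layout with a 64-bit JDK, so adjust them to your machine):

g++ hdfs_test.cpp -o hdfs_test \
    -I$HADOOP_HOME/include \
    -L$HADOOP_HOME/lib/native -lhdfs \
    -L$JAVA_HOME/jre/lib/amd64/server -ljvm

At run time the directories holding libhdfs.so and libjvm.so also need to be on LD_LIBRARY_PATH (or baked in via rpath).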

I compiled it, but when I run it with *./hdfs_test* I get this:

loadFileSystems error:(unable to get stack trace for
java.lang.NoClassDefFoundError exception:
ExceptionUtils::getStackTrace error.)
hdfsBuilderConnect(forceNewInstance=0, nn=default, port=0,
kerbTicketCachePath=(NULL), userName=(NULL)) error:(unable to get
stack trace for java.lang.NoClassDefFoundError exception:
ExceptionUtils::getStackTrace error.)
hdfsOpenFile(/tmp/testfile.txt): constructNewObjectOfPath
error:(unable to get stack trace for java.lang.NoClassDefFoundError
exception: ExceptionUtils::getStackTrace error.)
Failed to open /tmp/testfile.txt for writing!

Maybe it is a problem with the classpath. My $HADOOP_HOME is /usr/local/hadoop,
and this is my *CLASSPATH* variable:

echo $CLASSPATH
/usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/*:/usr/local/hadoop/share/hadoop/common/*:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/*:/usr/local/hadoop/share/hadoop/hdfs/*:/usr/local/hadoop/share/hadoop/yarn/lib/*:/usr/local/hadoop/share/hadoop/yarn/*:/usr/local/hadoop/share/hadoop/mapreduce/lib/*:/usr/local/hadoop/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar


Any help is appreciated. Thanks!
Harsh J
2014-01-14 02:39:29 UTC
Permalink
I've found in the past that the native code runtime somehow doesn't
support wildcarded classpaths. If you add the jars explicitly to the
CLASSPATH, your app will work. You could use a simple shell loop, such
as the one in one of my examples at
https://github.com/QwertyManiac/cdh4-libhdfs-example/blob/master/exec.sh#L3,
to populate it easily instead of doing it by hand.
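A minimal sketch of such a loop, assuming the jars sit under
$HADOOP_HOME/share/hadoop as in your layout:

export CLASSPATH=$HADOOP_HOME/etc/hadoop
# Append each jar explicitly; the wildcard ('*') entries are not
# expanded by the JVM started through libhdfs/JNI.
for jar in $(find $HADOOP_HOME/share/hadoop -name '*.jar'); do
  CLASSPATH=$CLASSPATH:$jar
done
export CLASSPATH

Run ./hdfs_test from the same shell afterwards so it inherits the expanded CLASSPATH.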
--
Harsh J
Andrea Barbato
2014-01-15 08:54:04 UTC
Permalink
Thanks for the answer, but I can't find the client folder that contains the
.jar files: /usr/lib/hadoop/client/*.jar.
I'm using Hadoop 2.2.0; can you tell me the name of this folder in that
version?
Post by Harsh J
I've found in the past that the native code runtime somehow doesn't
support wildcarded classpaths. If you add the jars explicitly to the
CLASSPATH, your app will work. You could use a simple shell loop, such
as the one in one of my examples at
https://github.com/QwertyManiac/cdh4-libhdfs-example/blob/master/exec.sh#L3,
to populate it easily instead of doing it by hand.
--
Harsh J