My problem occurs when I read a remote compressed file from S3, a zip file to be specific. The file is not corrupted, yet sometimes I get an exception and sometimes I don't.
I found a similar problem described in this old, still-open bug: http://jira.pentaho.com/browse/PDI-1800
This is the error that is happening. I am running this transformation on a Carte server. The error is not easy to reproduce, so unfortunately I do not have a recipe for reproducing it.
org.pentaho.di.core.exception.KettleFileException:
Exception reading line: java.io.EOFException: Unexpected end of ZLIB input stream
Unexpected end of ZLIB input stream
Unexpected end of ZLIB input stream
at org.pentaho.di.trans.steps.fileinput.text.TextFileInputUtils.getLine(TextFileInputUtils.java:326)
at org.pentaho.di.trans.steps.fileinput.text.TextFileInputReader.tryToReadLine(TextFileInputReader.java:420)
at org.pentaho.di.trans.steps.fileinput.text.TextFileInputReader.readRow(TextFileInputReader.java:167)
at org.pentaho.di.trans.steps.fileinput.BaseFileInputStep.processRow(BaseFileInputStep.java:205)
at org.pentaho.di.trans.step.RunThread.run(RunThread.java:62)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.EOFException: Unexpected end of ZLIB input stream
at java.util.zip.InflaterInputStream.fill(InflaterInputStream.java:240)
at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:158)
at java.util.zip.ZipInputStream.read(ZipInputStream.java:194)
at org.pentaho.di.core.compress.CompressionInputStream.read(CompressionInputStream.java:68)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:284)
at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
at sun.nio.cs.StreamDecoder.read0(StreamDecoder.java:127)
at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:112)
at java.io.InputStreamReader.read(InputStreamReader.java:168)
at org.pentaho.di.trans.steps.fileinput.text.TextFileInputUtils.getLine(TextFileInputUtils.java:294)
… 5 more
2017/12/13 12:07:40 – S3CsvInput.0 – ERROR (version 7.1.0.0-12, build 1 from 2017-05-16 17.18.02 by buildguy) : Unexpected error
2017/12/13 12:07:40 – S3CsvInput.0 – ERROR (version 7.1.0.0-12, build 1 from 2017-05-16 17.18.02 by buildguy) : org.pentaho.di.core.exception.KettleFileException:
2017/12/13 12:07:40 – S3CsvInput.0 –
2017/12/13 12:07:40 – S3CsvInput.0 – Exception reading line: java.io.EOFException: Unexpected end of ZLIB input stream
2017/12/13 12:07:40 – S3CsvInput.0 – Unexpected end of ZLIB input stream
2017/12/13 12:07:40 – S3CsvInput.0 –
2017/12/13 12:07:40 – S3CsvInput.0 – Unexpected end of ZLIB input stream
2017/12/13 12:07:40 – S3CsvInput.0 –
2017/12/13 12:07:40 – S3CsvInput.0 – at org.pentaho.di.trans.steps.fileinput.text.TextFileInputUtils.getLine(TextFileInputUtils.java:326)
2017/12/13 12:07:40 – S3CsvInput.0 – at org.pentaho.di.trans.steps.fileinput.text.TextFileInputReader.tryToReadLine(TextFileInputReader.java:420)
2017/12/13 12:07:40 – S3CsvInput.0 – at org.pentaho.di.trans.steps.fileinput.text.TextFileInputReader.readRow(TextFileInputReader.java:167)
2017/12/13 12:07:40 – S3CsvInput.0 – at org.pentaho.di.trans.steps.fileinput.BaseFileInputStep.processRow(BaseFileInputStep.java:205)
2017/12/13 12:07:40 – S3CsvInput.0 – at org.pentaho.di.trans.step.RunThread.run(RunThread.java:62)
2017/12/13 12:07:40 – S3CsvInput.0 – at java.lang.Thread.run(Thread.java:745)
2017/12/13 12:07:40 – S3CsvInput.0 – Caused by: java.io.EOFException: Unexpected end of ZLIB input stream
2017/12/13 12:07:40 – S3CsvInput.0 – at java.util.zip.InflaterInputStream.fill(InflaterInputStream.java:240)
2017/12/13 12:07:40 – S3CsvInput.0 – at java.util.zip.InflaterInputStream.read(InflaterInputStream.java:158)
2017/12/13 12:07:40 – S3CsvInput.0 – at java.util.zip.ZipInputStream.read(ZipInputStream.java:194)
2017/12/13 12:07:40 – S3CsvInput.0 – at org.pentaho.di.core.compress.CompressionInputStream.read(CompressionInputStream.java:68)
2017/12/13 12:07:40 – S3CsvInput.0 – at java.io.BufferedInputStream.read1(BufferedInputStream.java:284)
2017/12/13 12:07:40 – S3CsvInput.0 – at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
2017/12/13 12:07:40 – S3CsvInput.0 – at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
2017/12/13 12:07:40 – S3CsvInput.0 – at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
2017/12/13 12:07:40 – S3CsvInput.0 – at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
2017/12/13 12:07:40 – S3CsvInput.0 – at sun.nio.cs.StreamDecoder.read0(StreamDecoder.java:127)
2017/12/13 12:07:40 – S3CsvInput.0 – at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:112)
2017/12/13 12:07:40 – S3CsvInput.0 – at java.io.InputStreamReader.read(InputStreamReader.java:168)
2017/12/13 12:07:40 – S3CsvInput.0 – at org.pentaho.di.trans.steps.fileinput.text.TextFileInputUtils.getLine(TextFileInputUtils.java:294)
2017/12/13 12:07:40 – S3CsvInput.0 – … 5 more
child index = 56, logging object : org.pentaho.di.core.logging.LoggingObject@46345a23 parent=1ff32099-5cbe-47b3-b32c-34f1291f6c09
2017/12/13 12:07:40 – md5_field12308.0 – Finished processing (I=0, O=0, R=391170, W=782340, U=0, E=0)
2017/12/13 12:07:40 – Filter Rows field12308.0 – Finished processing (I=0, O=0, R=3, W=3, U=0, E=0)
2017/12/13 12:07:40 – Filter Rows field12315.0 – Finished processing (I=0, O=0, R=13, W=13, U=0, E=0)
2017/12/13 12:07:40 – TransLoad_cube5403_data137170 – ERROR (version 7.1.0.0-12, build 1 from 2017-05-16 17.18.02 by buildguy) : Errors detected!
It could very well be a network issue: the file isn't fully downloaded, so the stream doesn't terminate properly.
Try downloading the file first and reading it from a local copy, and see if the problem persists.
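One way to test that is to copy the object down (for example with aws s3 cp s3://your-bucket/your-file.zip /tmp/, where the bucket and key are placeholders) and then read the local copy end to end. Below is a minimal Java sketch that takes the file path as an argument and reads every zip entry to completion; if the transfer was truncated, this fails deterministically with the same "Unexpected end of ZLIB input stream":

import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;

public class ZipCheck {
    public static void main(String[] args) throws IOException {
        // Stream every entry to EOF; a truncated archive throws EOFException here.
        try (ZipInputStream zis = new ZipInputStream(
                new BufferedInputStream(new FileInputStream(args[0])))) {
            byte[] buf = new byte[8192];
            ZipEntry entry;
            while ((entry = zis.getNextEntry()) != null) {
                long total = 0;
                int n;
                while ((n = zis.read(buf)) != -1) {
                    total += n;
                }
                System.out.println(entry.getName() + ": " + total + " bytes OK");
            }
        }
    }
}

If the local read always succeeds, the truncation is happening on the S3/network side rather than in the archive itself.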
I'm using a RabbitMQ high-availability cluster with 3 nodes, version: 3.8.3
Spec:
RAM: 4GB
CPU: 2 CPUs
Intermittently I'm getting the following errors, and some nodes crash with memory alarms.
Application mnesia exited with reason: stopped
wal: encountered error during recovery: badarg
Full log entries:
**********************************************************
*** Publishers will be blocked until this alarm clears ***
**********************************************************
2020-07-14 01:13:00.914 [warning] <0.328.0> rabbit_sysmon_handler busy_dist_port <0.456.0> [{name,rabbit_alarm},{initial_call,{gen_event,init_it,6}},{erlang,bif_return_trap,2},{message_queue_len,0}] {#Port<0.968>,unknown}
2020-07-14 01:13:02.838 [warning] <0.328.0> rabbit_sysmon_handler busy_dist_port <0.684.0> [{initial_call,{rabbit_prequeue,init,1}},{erts_internal,dsend_continue_trap,1},{message_queue_len,0}] {#Port<0.968>,unknown}
2020-07-14 01:31:34.457 [info] <0.8.0> Log file opened with Lager
2020-07-14 01:31:37.799 [info] <0.8.0> Feature flags: list of feature flags found:
2020-07-14 01:31:37.799 [info] <0.8.0> Feature flags: [x] drop_unroutable_metric
2020-07-14 01:31:37.799 [info] <0.8.0> Feature flags: [x] empty_basic_get_metric
2020-07-14 01:31:37.799 [info] <0.8.0> Feature flags: [x] implicit_default_bindings
2020-07-14 01:31:37.799 [info] <0.8.0> Feature flags: [x] quorum_queue
2020-07-14 01:31:37.800 [info] <0.8.0> Feature flags: [x] virtual_host_metadata
2020-07-14 01:31:37.800 [info] <0.8.0> Feature flags: feature flag states written to disk: yes
2020-07-14 01:31:37.910 [info] <0.43.0> Application mnesia exited with reason: stopped
2020-07-14 01:31:38.072 [info] <0.395.0> ra: meta data store initialised. 0 record(s) recovered
2020-07-14 01:31:38.072 [info] <0.402.0> WAL: recovering ["/var/lib/rabbitmq/mnesia/rabbit@rmq-3/quorum/rabbit@rmq-3/00000058.wal"]
2020-07-14 01:31:38.518 [warning] <0.402.0> wal: encountered error during recovery: badarg
Around that time I could see that system iowait was high, and I also saw a high rate of TCP errors.
What may be the possible reasons for this?
Any help would be greatly appreciated.
Thanks.
This doesn't solve the node-crashing problem, but according to this Google Groups post, the "wal: encountered error during recovery: badarg" message in 3.8.3 can be ignored:
This error message has no impact at all and will not be printed in 3.8.4
So perhaps that line is a red herring and your problem is elsewhere.
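Given that your own log shows busy_dist_port warnings alongside the alarm banner, the memory watermark and the inter-node (distribution) buffer are worth a look. A sketch of the relevant rabbitmq.conf setting, with the default value shown for reference rather than as a tuning recommendation for 4GB nodes:

# rabbitmq.conf
# Memory alarm threshold: publishers are blocked once a node uses more
# than this fraction of available RAM (0.4 is the default).
vm_memory_high_watermark.relative = 0.4

The busy_dist_port warnings point at the Erlang distribution buffer filling up between nodes; that buffer can be enlarged via the RABBITMQ_DISTRIBUTION_BUFFER_SIZE environment variable if inter-node traffic turns out to be the bottleneck.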
In the Firebase crash report, no line numbers are shown; they come through as "???".
I can't find any answer related to this "Unknown Source" issue, which happens only on Android version 8.1.0.
react-native-cli: 2.0.1
react-native: 0.59.1
com.facebook.react.bridge.ReadableNativeMap.getValue (Unknown Source:27)
com.facebook.react.bridge.ReadableNativeMap.getValue
com.facebook.react.bridge.ReadableNativeMap.getInt (Unknown Source:17)
com.facebook.react.g.a.a (Unknown Source:44)
com.facebook.react.modules.core.ExceptionsManagerModule.reportSoftException (Unknown Source:16)
java.lang.reflect.Method.invoke (Method.java)
com.facebook.react.bridge.JavaMethodWrapper.invoke (Unknown Source:148)
com.facebook.react.bridge.JavaModuleWrapper.invoke (Unknown Source:21)
com.facebook.react.bridge.queue.NativeRunnable.run
android.os.Handler.handleCallback (Handler.java:789)
android.os.Handler.dispatchMessage (Handler.java:98)
com.facebook.react.bridge.queue.MessageQueueThreadHandler.dispatchMessage
android.os.Looper.loop (Looper.java:164)
com.facebook.react.bridge.queue.MessageQueueThreadImpl$4.run (Unknown Source:37)
java.lang.Thread.run (Thread.java:764)
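For what it's worth, the obfuscated frame com.facebook.react.g.a.a suggests the release build is minified, and "Unknown Source" is the usual symptom of stack traces losing their SourceFile/LineNumberTable attributes. Assuming ProGuard/R8 is enabled for the release build, a minimal proguard-rules.pro sketch that keeps line numbers in crash reports:

# Keep source file names and line numbers so crash reports show them
-keepattributes SourceFile,LineNumberTable
# Optionally replace the real source file name in traces
-renamesourcefileattribute SourceFile

With the build's mapping.txt uploaded to Firebase, the obfuscated class and method names can then be deobfuscated as well.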
I am trying to upgrade our job from Flink 1.4.2 to 1.7.1, but I keep running into timeouts after submitting the job. The Flink job runs on our Hadoop cluster (version 2.7) with YARN.
I've seen the following behavior:
Using the same flink-conf.yaml as we used in 1.4.2: versions 1.5.6 / 1.6.3 / 1.7.1 all time out, while 1.4.2 works.
Using 1.5.6 with "mode: legacy" (to switch off FLIP-6) works.
Using 1.7.1 with "mode: legacy" gives a timeout (I assume this option was removed, but the documentation is outdated? https://ci.apache.org/projects/flink/flink-docs-stable/ops/config.html#legacy)
When the timeout happens I get the following stacktrace:
INFO class java.time.Instant does not contain a getter for field seconds
INFO class com.bol.fin_hdp.cm1.domain.Cm1Transportable does not contain a getter for field globalId
INFO Submitting job 5af931bcef395a78b5af2b97e92dcffe (detached: false).
INFO ------------------------------------------------------------
INFO The program finished with the following exception:
INFO org.apache.flink.client.program.ProgramInvocationException: The main method caused an error.
INFO at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:545)
INFO at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:420)
INFO at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:404)
INFO at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:798)
INFO at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:289)
INFO at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:215)
INFO at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1035)
INFO at org.apache.flink.client.cli.CliFrontend.lambda$main$9(CliFrontend.java:1111)
INFO at java.security.AccessController.doPrivileged(Native Method)
INFO at javax.security.auth.Subject.doAs(Subject.java:422)
INFO at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1754)
INFO at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
INFO at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1111)
INFO Caused by: java.lang.RuntimeException: org.apache.flink.client.program.ProgramInvocationException: Could not retrieve the execution result.
INFO at com.bol.fin_hdp.job.starter.IntervalJobStarter.startJob(IntervalJobStarter.java:43)
INFO at com.bol.fin_hdp.job.starter.IntervalJobStarter.startJobWithConfig(IntervalJobStarter.java:32)
INFO at com.bol.fin_hdp.Main.main(Main.java:8)
INFO at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
INFO at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
INFO at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
INFO at java.lang.reflect.Method.invoke(Method.java:498)
INFO at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:528)
INFO ... 12 more
INFO Caused by: org.apache.flink.client.program.ProgramInvocationException: Could not retrieve the execution result.
INFO at org.apache.flink.client.program.rest.RestClusterClient.submitJob(RestClusterClient.java:258)
INFO at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:464)
INFO at org.apache.flink.streaming.api.environment.StreamContextEnvironment.execute(StreamContextEnvironment.java:66)
INFO at com.bol.fin_hdp.cm1.job.Job.execute(Job.java:54)
INFO at com.bol.fin_hdp.job.starter.IntervalJobStarter.startJob(IntervalJobStarter.java:41)
INFO ... 19 more
INFO Caused by: org.apache.flink.runtime.client.JobSubmissionException: Failed to submit JobGraph.
INFO at org.apache.flink.client.program.rest.RestClusterClient.lambda$submitJob$8(RestClusterClient.java:371)
INFO at java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
INFO at java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
INFO at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
INFO at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
INFO at org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$5(FutureUtils.java:216)
INFO at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
INFO at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
INFO at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
INFO at java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1977)
INFO at org.apache.flink.runtime.rest.RestClient.lambda$submitRequest$1(RestClient.java:301)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:424)
INFO at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:214)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.PromiseTask$RunnableAdapter.call(PromiseTask.java:38)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:120)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:357)
INFO at org.apache.flink.shaded.netty4.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:357)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
INFO at org.apache.flink.shaded.netty4.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137)
INFO at java.lang.Thread.run(Thread.java:748)
INFO Caused by: org.apache.flink.runtime.concurrent.FutureUtils$RetryException: Could not complete the operation. Number of retries has been exhausted.
INFO at org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$5(FutureUtils.java:213)
INFO ... 17 more
INFO Caused by: java.util.concurrent.CompletionException: org.apache.flink.shaded.netty4.io.netty.channel.ConnectTimeoutException: connection timed out: shd-hdp-b-slave-01...
INFO at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
INFO at java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
INFO at java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:943)
INFO at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:926)
INFO ... 15 more
INFO Caused by: org.apache.flink.shaded.netty4.io.netty.channel.ConnectTimeoutException: connection timed out: shd-hdp-b-slave-017.example.com/some.ip.address:46500
INFO at org.apache.flink.shaded.netty4.io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe$1.run(AbstractNioChannel.java:212)
INFO ... 7 more
What changed in FLIP-6 that might cause this behavior, and how can I fix it?
For our jobs on YARN with Flink 1.6, we had to bump up the web.timeout setting via -yD web.timeout=100000.
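For example, assuming a standard YARN submission (the jar path is a placeholder), the property can be passed as a dynamic option on the command line:

flink run -m yarn-cluster -yD web.timeout=100000 path/to/your-job.jar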
In our case, there was a firewall between the machine submitting the job and our Hadoop cluster.
In newer Flink versions (1.7 and up), Flink uses REST to submit jobs. On YARN setups, the port number for this REST service is random and could not be set.
Flink 1.8.0 introduced a config option to set this to a port or port range using:
rest.bind-port: 55520-55530
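A quick sanity check from the submitting machine is to probe the host and port named in the ConnectTimeoutException, for example:

# Test reachability of the REST endpoint reported in the stack trace
nc -vz shd-hdp-b-slave-017.example.com 46500

If that connection hangs or is refused, opening the configured rest.bind-port range in the firewall between the client and the YARN nodes should resolve the timeout.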
I am trying to install Nexus 3.4.0-02-win64, but at the end the exception below occurs. How can I resolve this issue?
I run the "nexus.exe /run" command from the "nexus-3.4.0-02-win64\nexus-3.4.0-02\bin" directory.
I have Java 8 on my PC, and it is 64-bit Windows.
2017-08-08 11:49:47,125+0300 ERROR [_shutdown_waiter] *SYSTEM java.lang.Throwable - java.lang.NumberFormatException: For input string: ""
2017-08-08 11:49:47,129+0300 ERROR [_shutdown_waiter] *SYSTEM java.lang.Throwable - at java.lang.NumberFormatException.forInputString(Unknown Source)
2017-08-08 11:49:47,131+0300 ERROR [_shutdown_waiter] *SYSTEM java.lang.Throwable - at java.lang.Integer.parseInt(Unknown Source)
2017-08-08 11:49:47,134+0300 ERROR [_shutdown_waiter] *SYSTEM java.lang.Throwable - at java.lang.Integer.parseInt(Unknown Source)
2017-08-08 11:49:47,137+0300 ERROR [_shutdown_waiter] *SYSTEM java.lang.Throwable - at com.exe4j.runtime.WinLauncher$1.run(WinLauncher.java:80)
There is a point in the operation where the command window seems to be reading user input to trigger application shutdown. If I hit the Enter key, it triggers this error. If I just ignore the command window, the application runs. And if I type a number and then hit Enter, Nexus shuts itself down but does not issue any error messages.
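If stdin handling in the foreground console is the trigger, a possible workaround (assuming the standard Nexus 3 Windows launcher commands) is to install and run Nexus as a Windows service so that no console input is read at all:

nexus.exe /install
nexus.exe /start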
I am getting an error while formatting the NameNode, as shown below:
[hadoop@localhost ~]$ /home/hadoop/project/hadoop-1.0.4/bin/hadoop namenood -format
Exception in thread "main" java.lang.NoClassDefFoundError: namenood
Caused by: java.lang.ClassNotFoundException: namenood
at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: namenood. Program will exit.
[hadoop@localhost ~]$
EDIT:
[hadoop@localhost ~]$ /home/hadoop/project/hadoop-1.0.4/bin/hadoop namenode -format
13/02/13 11:23:47 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = localhost.localdomain/127.0.0.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 1.0.4
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct 3 05:13:58 UTC 2012
************************************************************/
13/02/13 11:23:47 INFO util.GSet: VM type = 32-bit
13/02/13 11:23:47 INFO util.GSet: 2% max memory = 19.33375 MB
13/02/13 11:23:47 INFO util.GSet: capacity = 2^22 = 4194304 entries
13/02/13 11:23:47 INFO util.GSet: recommended=4194304, actual=4194304
13/02/13 11:23:48 INFO namenode.FSNamesystem: fsOwner=hadoop
13/02/13 11:23:48 INFO namenode.FSNamesystem: supergroup=supergroup
13/02/13 11:23:48 INFO namenode.FSNamesystem: isPermissionEnabled=true
13/02/13 11:23:48 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
13/02/13 11:23:48 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
13/02/13 11:23:48 INFO namenode.NameNode: Caching file names occuring more than 10 times
13/02/13 11:23:49 ERROR namenode.NameNode: java.io.IOException: Cannot create directory /app/hadoop/tmp/dfs/name/current
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:297)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1320)
at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:1339)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1164)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1271)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
13/02/13 11:23:49 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
************************************************************/
Hoping for your suggestions.
As I recall, it should be:
/home/hadoop/project/hadoop-1.0.4/bin/hadoop namenode -format
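That fixes the NoClassDefFoundError, since the hadoop script treats an unrecognized subcommand ("namenood") as a class name to run. The error in your EDIT ("Cannot create directory /app/hadoop/tmp/dfs/name/current") is a separate permissions problem: the hadoop user typically cannot create directories under /app. A likely fix, assuming /app/hadoop/tmp really is your configured hadoop.tmp.dir:

sudo mkdir -p /app/hadoop/tmp
sudo chown -R hadoop:hadoop /app/hadoop/tmp

Then rerun bin/hadoop namenode -format as the hadoop user.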