I am trying to upload files to a remote machine with the Deployment feature, but it corrupts them, filling them with zero bytes.
What can be wrong?
This happens with only one remote machine; deploying to other machines works fine. So it must be some interference between that machine's configuration and a JetBrains configuration issue or bug.
SFTP put works fine on the same machine.
If I create a directory with Russian characters in its name under /root, I get the following error when listing the /root folder:
2019-06-24 17:06:49,443 [ 25890] DEBUG - ins.plugins.webDeployment.sftp - cd "/"
2019-06-24 17:06:49,459 [ 25906] DEBUG - ins.plugins.webDeployment.sftp - stat "root"
2019-06-24 17:06:49,467 [ 25914] DEBUG - ins.plugins.webDeployment.sftp - drwx------ 0 0 4096 Mon Jun 24 17:05:39 MSK 2019, mtime 1,561,385,139
2019-06-24 17:06:49,469 [ 25916] WARN - i.remotebrowser.ServerTreeNode - Could not list the contents of folder "sftp://cmnanny/root".
org.apache.commons.vfs2.FileSystemException: Could not list the contents of folder "sftp://cmnanny/root".
at org.apache.commons.vfs2.provider.AbstractFileObject.getChildren(AbstractFileObject.java:1101)
at com.jetbrains.plugins.webDeployment.DeploymentPathUtils.getChildren(DeploymentPathUtils.java:373)
at com.jetbrains.plugins.webDeployment.ui.remotebrowser.ServerTreeNode$1.compute(ServerTreeNode.java:250)
at com.jetbrains.plugins.webDeployment.ui.remotebrowser.ServerTreeNode$1.compute(ServerTreeNode.java:247)
at com.jetbrains.plugins.webDeployment.connections.RemoteConnectionPool$RemoteConnectionImpl.executeServerOperation(RemoteConnectionPool.java:141)
at com.jetbrains.plugins.webDeployment.ui.remotebrowser.ServerTreeNode.getChildren(ServerTreeNode.java:247)
at com.jetbrains.plugins.webDeployment.ui.remotebrowser.ServerTreeNode.createChildren(ServerTreeNode.java:206)
at com.jetbrains.plugins.webDeployment.ui.remotebrowser.ServerTreeNode.loadChildren(ServerTreeNode.java:166)
at com.jetbrains.plugins.webDeployment.ui.remotebrowser.ServerTreeNode.lambda$getChildren$0(ServerTreeNode.java:157)
at com.intellij.openapi.application.impl.ApplicationImpl$1.run(ApplicationImpl.java:311)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: 4: Failure
at com.jcraft.jsch.ChannelSftp.throwStatusError(ChannelSftp.java:2873)
at com.jcraft.jsch.ChannelSftp.ls(ChannelSftp.java:1633)
at com.jcraft.jsch.ChannelSftp.ls(ChannelSftp.java:1553)
at com.jetbrains.plugins.webDeployment.config.LoggingSftpChannel.ls(LoggingSftpChannel.java:215)
at org.apache.commons.vfs2.provider.sftp.SftpFileObject.doListChildrenResolved(SftpFileObject.java:495)
at org.apache.commons.vfs2.provider.AbstractFileObject.getChildren(AbstractFileObject.java:1091)
The machine's host name contains special characters like # and &; maybe they are confusing CLion's parser?
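For comparison, the same listing can be checked outside the IDE with the stock sftp client (host name taken from the log above); if this works while the IDE fails, the problem is likely in how the plugin decodes non-ASCII file names:

sftp root@cmnanny
sftp> ls -la /root

It may also be worth running locale on the remote machine, since a non-UTF-8 LANG can make the SFTP server return file names the client cannot decode.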
Can someone tell me what the lines below indicate? I am using Tomcat 7 and it suddenly goes down after some time. I can see the log below in the /var/log/messages file. Did the JVM crash?
Dec 28 17:39:06 track03 abrt[23595]: Saved core dump of pid 9849 (/usr/java/jdk1.6.0_27/bin/java) to /var/spool/abrt/ccpp-2016-12-28-17:38:14-9849 (12624896000 bytes)
Dec 28 17:39:06 track03 abrtd: Directory 'ccpp-2016-12-28-17:38:14-9849' creation detected
Dec 28 17:39:07 track03 abrtd: Package 'jdk' isn't signed with proper key
Dec 28 17:39:07 track03 abrtd: 'post-create' on '/var/spool/abrt/ccpp-2016-12-28-17:38:14-9849' exited with 1
Dec 28 17:39:07 track03 abrtd: Deleting problem directory '/var/spool/abrt/ccpp-2016-12-28-17:38:14-9849'
That means the process /usr/java/jdk1.6.0_27/bin/java crashed and a core dump was saved to the directory /var/spool/abrt/ccpp-2016-12-28-17:38:14-9849. But the abrtd daemon recognized that the jdk package was not signed with a proper GPG key, so the directory was deleted.
For more information, check:
ABRT logs messages with "Package isn't signed with proper key."
Red Hat Linux error - Package 'rmmagent' isn't signed with proper key
If you want to inspect core dumps, you can change the default behavior of abrtd for third-party packages:
Edit the file /etc/abrt/abrt-action-save-package-data.conf
Set OpenGPGCheck = no
Reload abrtd with the command: service abrtd reload
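Put together, the change looks like this (the sed line is just one way to make the edit; the file and default value are as shipped with abrt):

# keep core dumps even for packages abrt cannot verify
sed -i 's/^OpenGPGCheck = yes/OpenGPGCheck = no/' /etc/abrt/abrt-action-save-package-data.conf
service abrtd reload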
I am trying to get Nutch 1.11 to execute a crawl. I am using Cygwin to run these commands on Windows 7.
Nutch is running and I am getting results from running bin/nutch, but I keep getting error messages when I try to run a crawl.
I get the following error when I try to execute a crawl with Nutch:
Error running: /cygdrive/c/Users/User5/Documents/Nutch/apache-nutch-1.11/runtime/local/bin/nutch inject TestCrawl/crawldb C:/Users/User5/Documents/Nutch/apache-nutch-1.11/runtime/local/urls/seed.txt
Failed with exit value 127.
I have JAVA_HOME set, and I have altered the hosts file to map 127.0.0.1 to localhost.
I wonder whether I am referencing the right directory; maybe that is the problem.
The full printout looks like:
User5@User5-PC /cygdrive/c/Users/User5/Documents/Nutch/apache-nutch-1.11/runtime/local
$ bin/crawl -i -D solr.server.url=http://localhost:8983/solr/ C:/Users/User5/Documents/Nutch/apache-nutch-1.11/runtime/local/urls/ TestCrawl/ 2
Injecting seed URLs
/cygdrive/c/Users/User5/Documents/Nutch/apache-nutch-1.11/runtime/local/bin/nutch inject TestCrawl//crawldb C:/Users/User5/Documents/Nutch/apache-nutch-1.11/runtime/local/urls/
Injector: starting at 2015-12-23 17:48:21
Injector: crawlDb: TestCrawl/crawldb
Injector: urlDir: C:/Users/User5/Documents/Nutch/apache-nutch-1.11/runtime/local/urls
Injector: Converting injected urls to crawl db entries.
Injector: java.lang.NullPointerException
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1012)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:445)
at org.apache.hadoop.util.Shell.run(Shell.java:418)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:650)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:739)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:722)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:633)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:421)
at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:281)
at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:125)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:348)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:833)
at org.apache.nutch.crawl.Injector.inject(Injector.java:323)
at org.apache.nutch.crawl.Injector.run(Injector.java:379)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.nutch.crawl.Injector.main(Injector.java:369)
Error running:
/cygdrive/c/Users/User5/Documents/Nutch/apache-nutch-1.11/runtime/local/bin/nutch inject TestCrawl//crawldb C:/Users/User5/Documents/Nutch/apache-nutch-1.11/runtime/local/urls/
Failed with exit value 127.
The hadoop log that I think may have something to do with the error I am getting is:
2016-01-07 12:24:40,360 ERROR util.Shell - Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:318)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:333)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:326)
at org.apache.hadoop.util.GenericOptionsParser.preProcessForWindows(GenericOptionsParser.java:432)
at org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:478)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:170)
at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:153)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:64)
at org.apache.nutch.crawl.Injector.main(Injector.java:369)
2016-01-07 12:24:40,450 ERROR crawl.Injector - Injector: java.lang.IllegalArgumentException: java.net.URISyntaxException: Illegal character in scheme name at index 15: solr.server.url=http://localhost:8983/solr
at org.apache.hadoop.fs.Path.initialize(Path.java:206)
at org.apache.hadoop.fs.Path.<init>(Path.java:172)
at org.apache.nutch.crawl.Injector.run(Injector.java:379)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.nutch.crawl.Injector.main(Injector.java:369)
Caused by: java.net.URISyntaxException: Illegal character in scheme name at index 15: solr.server.url=http://localhost:8983/solr
at java.net.URI$Parser.fail(URI.java:2848)
at java.net.URI$Parser.checkChars(URI.java:3021)
at java.net.URI$Parser.parse(URI.java:3048)
at java.net.URI.<init>(URI.java:746)
at org.apache.hadoop.fs.Path.initialize(Path.java:203)
... 4 more
You are running Linux commands from Cygwin, and there is no C:\ path on Linux systems. The correct command should be something like:
/cygdrive/c/Users/User5/Documents/Nutch/apache-nutch-1.11/runtime/local/bin/nutch inject TestCrawl/crawldb /cygdrive/c/Users/User5/Documents/Nutch/apache-nutch-1.11/runtime/local/urls/seed.txt
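If you do not want to type the /cygdrive form by hand, cygpath can convert a Windows path for you (a small sketch using the paths from the question):

# cygpath -u translates a Windows path into its Cygwin equivalent
SEED=$(cygpath -u 'C:\Users\User5\Documents\Nutch\apache-nutch-1.11\runtime\local\urls\seed.txt')
bin/nutch inject TestCrawl/crawldb "$SEED"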
You have the answer to your problem in this message:
2016-01-07 12:24:40,360 ERROR util.Shell - Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
This is happening because the Hadoop version included with Nutch 1.11 is designed to work on Linux out of the box, not on Windows.
I had the same situation and ended up running Nutch 1.11 in an Ubuntu VirtualBox VM.
The hadoop-core jar file is needed when you are working with Nutch. With Nutch 1.11, the compatible hadoop-core jar is 0.20.0.
Please download the jar from this link:
http://www.java2s.com/Code/Jar/h/Downloadhadoop0200corejar.htm
Paste that jar into the "C:\cygwin64\home\apache-nutch-1.11\lib" folder and it will run successfully.
The problem is pretty clear: according to your Hadoop log, it cannot find the winutils.exe file. Place winutils.exe in the %HADOOP_HOME%\bin folder.
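A minimal sketch of that setup from Cygwin (the C:\hadoop location is just an example; any directory containing bin\winutils.exe works):

# assumption: winutils.exe has been placed in C:\hadoop\bin
export HADOOP_HOME='C:\hadoop'
# Hadoop's Shell class resolves %HADOOP_HOME%\bin\winutils.exe, so verify it is there
ls "$(cygpath -u "$HADOOP_HOME")/bin/winutils.exe"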
When I try to start the node agent on my GlassFish app server via PuTTY, I get the following warning:
Apr 25, 2014 5:03:03 AM com.sun.enterprise.admin.server.core.channel.RMIClient warn
WARNING: channel.client_init_error
Apr 25, 2014 5:03:03 AM com.sun.enterprise.admin.server.core.channel.RMIClient warn
WARNING: channel.client_init_error
and finally "CLI137 Command start-node-agent failed." a timeout.
The log file details are
[#|2014-04-25T05:03:04.388-0500|WARNING|sun-appserver2.1|javax.enterprise.system.tools.admin|_ThreadID=10;_ThreadName=main;|ADM5801:Admin server channel creation failed.|#]
[#|2014-04-25T05:03:04.396-0500|SEVERE|sun-appserver2.1|javax.ee.enterprise.system.nodeagent|_ThreadID=10;_ThreadName=main;|NAGT0014:Unexpected Node Agent exception.
com.sun.appserv.server.ServerLifecycleException: java.lang.RuntimeException: Unable to save stub to /opt/vendor/sunone/SDK/nodeagents/ACSNA-TEST/agent/config/admch
at com.sun.enterprise.admin.server.core.channel.AdminChannel.createRMIChannel(AdminChannel.java:111)
at com.sun.enterprise.ee.nodeagent.NodeAgentMain.startup(NodeAgentMain.java:204)
at com.sun.enterprise.ee.nodeagent.NodeAgentMain.main(NodeAgentMain.java:396)
Caused by: java.lang.RuntimeException: Unable to save stub to /opt/vendor/sunone/SDK/nodeagents/ACSNA-TEST/agent/config/admch
at com.sun.enterprise.admin.server.core.channel.AdminChannel.saveStubToFile(AdminChannel.java:354)
at com.sun.enterprise.admin.server.core.channel.AdminChannel.createRMIChannel(AdminChannel.java:107)
... 2 more
Caused by: java.io.FileNotFoundException: /opt/vendor/sunone/SDK/nodeagents/ACSNA-TEST/agent/config/admch (Permission denied)
at java.io.FileOutputStream.open(Native Method)
at java.io.FileOutputStream.<init>(FileOutputStream.java:179)
at java.io.FileOutputStream.<init>(FileOutputStream.java:131)
at com.sun.enterprise.admin.server.core.channel.AdminChannel.saveStubToFile(AdminChannel.java:348)
... 3 more
I am unable to figure out what exactly the issue is. I believe I have my permissions right. Please provide some input on this issue.
There obviously IS a permission problem. Make sure that the user your GlassFish server runs as has permission to create a file in this path. Try to touch a file in this path from a shell.
Check that every directory in the path /opt/vendor/sunone/SDK/nodeagents/ACSNA-TEST/agent/config/admch has the execute bit set for the GlassFish user or its group (chmod +x).
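A quick way to run that check, assuming the node agent runs as a user named glassfish (substitute your actual service user):

# become the user the node agent runs as (user name is an assumption)
su - glassfish
# try to create the file the stub would be written to
touch /opt/vendor/sunone/SDK/nodeagents/ACSNA-TEST/agent/config/admch
# if that fails, inspect the permissions along the path
ls -ld /opt/vendor/sunone/SDK/nodeagents/ACSNA-TEST/agent/config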
I have encountered a problem running JBoss as a service on Fedora. Here is the log I get after running the command systemctl status jboss-as.service:
jboss-as.service - SYSV: JBoss AS Standalone
Loaded: loaded (/etc/rc.d/init.d/jboss-as)
Active: failed (Result: resources) since Thu 2014-01-16 09:31:54 CET; 46min ago
Process: 501 ExecStart=/etc/rc.d/init.d/jboss-as start (code=exited, status=0/SUCCESS)
Jan 16 09:31:22 servername.domain systemd[1]: Starting SYSV: JBoss AS Standalone...
Jan 16 09:31:23 servername.domain jboss-as[501]: Starting jboss-as: chown: missing operand after ‘/var/run/jboss-as’
Jan 16 09:31:23 servername.domain jboss-as[501]: Try 'chown --help' for more information.
Jan 16 09:31:54 servername.domain jboss-as[501]: [ OK ]
Jan 16 09:31:54 servername.domain systemd[1]: PID file /var/run/jboss-as/jboss-as-standalone.pid not readable (yet?) after start.
Jan 16 09:31:54 servername.domain systemd[1]: Failed to start SYSV: JBoss AS Standalone.
Jan 16 09:31:54 servername.domain systemd[1]: Unit jboss-as.service entered failed state.
First, I tried to find a solution to the "chown: missing operand after ..." problem and found something here, but it did not help. I also looked for an answer to the PID file problem, but the file does not even exist in the location /var/run/jboss-as/.
This is because the startup script uses the variable $JBOSS_USER, but it is not defined inside the script.
Put the following line in the file /etc/jboss-as/jboss-as.conf:
JBOSS_USER=root
(replace root with another dedicated Linux user, e.g. jboss-as)
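That also explains the chown message in the log: with $JBOSS_USER unset, the unquoted expansion in the init script drops the user argument entirely (a reconstruction of the usual init-script pattern, not the exact line from the script):

# with JBOSS_USER empty this runs as: chown /var/run/jboss-as
chown $JBOSS_USER /var/run/jboss-as
# -> chown: missing operand after '/var/run/jboss-as'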
It looks like the service startup script expects to be able to write to the /var/run/jboss-as directory but doesn't have permission to do so.
In your place I'd ensure that this directory is owned by the user that runs JBoss and that it is writable.
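For example, assuming JBoss runs as a user named jboss-as (adjust the name to your setup):

# give the service user ownership of the runtime directory
chown -R jboss-as: /var/run/jboss-as
# and make sure the owner can write to it
chmod u+rwx /var/run/jboss-as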
Check that there aren't other errors (particularly missing or incorrect paths) in your /etc/rc.d/init.d/jboss-as file (I assume you copied it from the JBoss install folder to create a startup script).
I had the same issue until I fixed a completely unrelated link in that script; then the problem went away.
On CentOS 7, if you copy jboss-as-standalone.sh straight into /etc/rc.d/init.d/, ensure the JBOSS_CONF and JBOSS_HOME paths are correct.
For me, it was a systemd issue: when I set up the service, I put the wrong PID file path.
Example:
In the service unit it was:
/var/run/jboss-as/jboss-as-standalone.pid
But in the script it was:
/var/run/jboss-as/jboss-as.pid
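In other words, the PIDFile= entry in the unit has to match the path the script actually writes (a hypothetical fragment; adjust paths to your install):

# /etc/systemd/system/jboss-as.service (fragment)
[Service]
Type=forking
PIDFile=/var/run/jboss-as/jboss-as-standalone.pid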
I'm new to CloudBees and having some difficulty getting my app deployed.
My app works locally when using bees run. It also works when I place it in another Tomcat installation as Tomcat\webapps\ROOT.
In case it matters, I've added a lib:
C:\Java\cloudbees-sdk-1.3.1\biblenav\webapp\WEB-INF\lib\urlrewritefilter-4.0.3.jar
My war file is 635 KB and I'm using the free account.
I've tried to deploy using bees deploy from C:\Java\cloudbees-sdk-1.3.1\biblenav\,
and I've tried to deploy the war file from the bees root directory. Both times I get the error below. I have no idea what to do about it. Can anyone help? Thanks!
C:\Java\cloudbees-sdk-1.3.1>bees app:deploy -a angelwarrior/biblenav ./biblenav/webapp/biblenav.war
Deploying application angelwarrior/biblenav (environment: ): .\biblenav\webapp\biblenav.war
........................uploaded 25%
........................uploaded 50%
........................uploaded 75%
........................upload completed
deploying application to server(s)...
Apr 25, 2013 11:25:23 PM com.cloudbees.api.BeesClient applicationDeployArchive
SEVERE: Invalid application deployment response: angelwarrior/biblenav
com.cloudbees.api.BeesClientException: Server.InternalError - java.lang.IllegalArgumentException: Platform error - {{invalid_local_plugin_dir,"/etc/genapp/plugins.d/jar"},
[{genapp_plugin,validate_plugin_dir,1},
{genapp_plugin,new,1},
{genapp_deploy,resolve_plugin,2},
{genapp_deploy,apply_stages,2},
{genapp_deploy,handle_task,1},
{e2_task,dispatch_handle_task,1},
{e2_service,dispatch_info,2},
{gen_server,handle_msg,5}]}
at com.cloudbees.api.BeesClient.readResponse(BeesClient.java:1121)
at com.cloudbees.api.BeesClient.applicationDeployArchive(BeesClient.java:638)
at com.cloudbees.sdk.commands.app.ApplicationDeploy.execute(ApplicationDeploy.java:322)
at com.cloudbees.sdk.commands.Command.run(Command.java:167)
at com.cloudbees.sdk.commands.Command.run(Command.java:80)
at com.cloudbees.sdk.Bees.run(Bees.java:117)
at com.cloudbees.sdk.Bees.main(Bees.java:308)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at com.cloudbees.sdk.boot.Launcher.main(Launcher.java:35)
That's an unfortunately obfuscated error message, but based on the details, it looks like you attempted to deploy the app at some point using a "-t jar" flag. This sets your runtime stack to "jar", which isn't a known stack, and that led to this downstream error.
You can see a list of valid stack names to use with that -t STACK flag in the CloudBees ClickStack docs.
In your case, it sounds like you'd like to run the deployed app package with Tomcat, so you probably want one of the following commands:
For Tomcat 6:
bees app:deploy -t tomcat -a APPID WAR_FILE
For Tomcat 7:
bees app:deploy -t tomcat7 -a APPID WAR_FILE
For JBoss 7:
bees app:deploy -t jboss -a APPID WAR_FILE
Note: once you set the stack with -t, it is sticky, so you don't need to specify it on subsequent deployments.
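Applied to your app, that would be something like the following (tomcat7 chosen here only as an example; pick whichever stack you actually want):

bees app:deploy -t tomcat7 -a angelwarrior/biblenav ./biblenav/webapp/biblenav.war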
This was a confusing error for you to see, so we'll also look at cleaning up that message to make it clearer.