Drop repository fails on file error - GraphDB

Context: GraphDB 7.1.0
Using the openrdf-console, when requesting to drop a repository:
drop myrepo .
I get an error/exception:
[ERROR] 2016-09-13 09:44:32,369 [repositories/myrepo | o.o.h.s.ProtocolExceptionResolver] Error while handling request (500)
org.openrdf.http.server.ServerHTTPException: org.openrdf.repository.RepositoryException: Unable to clean up resources for removed repository myrepo
Caused by: java.io.IOException: Unable to delete file /nas/install/graphdb/graphdb-se-7.1.0/graphdb-se-7.1.0/data/repositories/myrepo/storage/.nfs000000016e3e49b200000006
Any further attempt to drop the repo or to add anything to it then fails with the same error.

Apparently GraphDB tries to delete the repository directory without first closing the file descriptors pointing to files in that directory.
In my case, the data directory is potentially big and lies on a NAS attached through NFS.
When an open file is deleted over NFS, the client renames it to a temporary .nfs000XXX placeholder instead, and that placeholder makes the directory removal fail.
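This "silly rename" is standard NFS client behaviour and can be reproduced outside GraphDB (a sketch; /mnt/nfs stands in for any NFS-mounted directory):
cd /mnt/nfs && mkdir demo && echo hi > demo/file
tail -f demo/file &   # hold the file open
rm demo/file          # "succeeds", but the client renames the file instead
ls -a demo            # shows a .nfs000... placeholder
rmdir demo            # fails: Directory not empty (until tail exits)
kill %1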
A workaround is to stop GraphDB, delete the repository's directory by hand, and restart GraphDB.
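Roughly like this (a sketch; the path comes from the error above, and how you stop and start GraphDB depends on your installation):
pkill -f graphdb-se   # assumption: GraphDB runs as a single java process
rm -rf /nas/install/graphdb/graphdb-se-7.1.0/graphdb-se-7.1.0/data/repositories/myrepo
# restart GraphDB as usual; once its file descriptors are gone, the
# .nfsXXXX placeholder is released and the directory can be deleted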

Start Node in Ignite

I want to start an Ignite node with a configuration file named example-igfs.xml. I have altered this configuration to use IGFS as a cache layer for HDFS, but when I execute the command to start the Ignite node I run into this error:
java.lang.NoClassDefFoundError: com/google/common/base/Preconditions
at org.apache.hadoop.conf.Configuration$DeprecationDelta.<init>(Configuration.java:361)
at org.apache.hadoop.conf.Configuration$DeprecationDelta.<init>(Configuration.java:374)
at org.apache.hadoop.conf.Configuration.<clinit>(Configuration.java:456)
at org.apache.ignite.internal.processors.hadoop.impl.HadoopUtils.safeCreateConfiguration(HadoopUtils.java:334)
at org.apache.ignite.internal.processors.hadoop.impl.delegate.HadoopBasicFileSystemFactoryDelegate.start(HadoopBasicFileSystemFactoryDelegate.java:129)
A java.lang.NoClassDefFoundError usually means Ignite can't find the required libraries (JARs).
In your case, you have to move the JARs to the $IGNITE_HOME/libs folder.
Create a folder in the libs directory, say hadoop-libs, and move all required JARs into it.
I am no Hadoop expert, but it seems you are missing the Hadoop client and its Google Guava dependency.
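A minimal sketch of that layout (the JAR names, versions, and source paths are placeholders; use whatever your Hadoop distribution actually ships):
mkdir -p $IGNITE_HOME/libs/hadoop-libs
cp /path/to/guava-16.0.1.jar $IGNITE_HOME/libs/hadoop-libs/
cp $HADOOP_HOME/share/hadoop/common/hadoop-common-*.jar $IGNITE_HOME/libs/hadoop-libs/
cp $HADOOP_HOME/share/hadoop/hdfs/hadoop-hdfs-*.jar $IGNITE_HOME/libs/hadoop-libs/
# restart the node so the new JARs are picked up on its classpath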

My Debian repository is throwing a "Hash Sum mismatch" error

We maintain a Debian repository for an app, and all .deb files are stored in an S3 bucket.
We wrote a script to upload the files and update the Packages.gz file. All went fine until one of the developers found deb-s3 and tried using it.
After the first package upload we started getting this error message:
W: Failed to fetch s3://s3.amazonaws.com/myapp/dists/test/main/binary-amd64/Packages Hash Sum mismatch
I've tried to restore an old version of our Packages.gz file with no success. I've searched for this error, and clearing /var/lib/apt/lists/ does not work either.
What would deb-s3 do that could break our entire repo?
It looks like deb-s3 creates a Release file under dists/test, and that conflicts with Packages.gz.
Removing the Release file restored our repository to what it was.
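In practice that meant deleting the stray file from the bucket (a sketch assuming the AWS CLI and the bucket layout from the error above):
aws s3 rm s3://myapp/dists/test/Release
# then, on a client, force a clean fetch of the index files:
sudo rm -rf /var/lib/apt/lists/*
sudo apt-get update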

Failed to create database 'metastore_db', see the next exception for details

I'm getting the following exception while trying to start Hive on Ubuntu 14.04 LTS: Caused by: java.sql.SQLException: Failed to create database 'metastore_db', see the next exception for details. The Hadoop installation is correct and working fine. Can anyone tell me what the problem is?
It is because you're not in the same folder where your metastore_db was created. I was facing the same problem because I was in my main user's folder; when I changed to hduser's folder, Hive started working.
I tried to find the XML file, but it was not there, so I searched and found where it was.
Similar to #dk14: in my case, I was in a folder I had no write permission on as my user; I moved to another directory and it worked fine.
The reason for the above error is that the user you are logged in as doesn't have permission to write in that particular directory, i.e. the directory in which you are running the schematool command.
For example, my Apache Hive setup was in /opt/apache-hive-3.1.2-bin, so I ran the command:
sudo chown -R hadoopusr /opt/apache-hive-3.1.2-bin/
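To confirm the permissions before rerunning schematool (a sketch; hadoopusr and the path mirror the example above):
ls -ld /opt/apache-hive-3.1.2-bin   # owner should now be hadoopusr
sudo -u hadoopusr touch /opt/apache-hive-3.1.2-bin/.write-test && echo writable
sudo -u hadoopusr rm /opt/apache-hive-3.1.2-bin/.write-test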
It is happening because you are in a different folder from the one where Hive is installed.
So first change directory to the folder where Hive is installed, then try to run Hive once again.
Hive should then work properly.
Best of luck.
After spending some (a lot of) time, I found the issue: the metastore_db directory already existed inside the DERBY_HOME/bin path and I didn't have admin access to it. You can either:
delete that folder using admin rights, or
open hive-site.xml in the HIVE_HOME/conf path, check the connection string there, and change the database name to something else; that worked for me.
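For the second option, the relevant setting is the JDBC connection string (a sketch; javax.jdo.option.ConnectionURL is the standard property for an embedded Derby metastore, and metastore_db2 is an arbitrary new name):
grep -A1 'javax.jdo.option.ConnectionURL' $HIVE_HOME/conf/hive-site.xml
# change the databaseName inside the <value> element, e.g.:
#   <value>jdbc:derby:;databaseName=metastore_db2;create=true</value>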

How to handle logs when deploying via war files

When I deploy moqui in tomcat6 by dropping in a war file, I get:
java.io.FileNotFoundException: /log/moqui.log (Permission denied)
Same with error.log.
I start tomcat with: sudo /etc/init.d/tomcat6 start
Is that where the log files should go in production mode, and, if so, why is it getting this error? Or is there a change that I need to make in the configuration?
The app still runs.
These errors were caused by a bug in the code: the moqui.runtime system property had not been set when the Logger was initialized. That is fixed in the master branch in commit #a6cc299.
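For builds from before that fix, one hypothetical workaround is to set the property yourself before Tomcat starts, e.g. in $CATALINA_HOME/bin/setenv.sh (the runtime path below is an example, not a documented default):
export CATALINA_OPTS="$CATALINA_OPTS -Dmoqui.runtime=/var/lib/tomcat6/moqui-runtime"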

Maven Error while running my selenium project in jenkins

When running the top-level Maven target
test
I get the following error:
FATAL: command execution failed
java.io.IOException: Cannot run program "mvn" (in directory "/var/lib/jenkins/jobs/selenium/workspace"): java.io.IOException: error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:475)
at hudson.Proc$LocalProc.<init>(Proc.java:244)
at hudson.Proc$LocalProc.<init>(Proc.java:216)
at hudson.Launcher$LocalLauncher.launch(Launcher.java:709)
at hudson.Launcher$ProcStarter.start(Launcher.java:338)
at hudson.Launcher$ProcStarter.join(Launcher.java:345)
at hudson.tasks.Maven.perform(Maven.java:263)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:19)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:717)
at hudson.model.Build$BuildExecution.build(Build.java:199)
at hudson.model.Build$BuildExecution.doRun(Build.java:160)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1502)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:46)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:236)
Caused by: java.io.IOException: java.io.IOException: error=2, No such file or directory
at java.lang.UNIXProcess.<init>(UNIXProcess.java:164)
at java.lang.ProcessImpl.start(ProcessImpl.java:81)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:468)
... 15 more
Build step 'Invoke top-level Maven targets' marked build as failure
This seems to be an issue with the Maven path, but I've set up Maven correctly on my host machine. M2_HOME, M2, and PATH are all correct. I know they are correct because I can run Maven commands from the command line. When I try to invoke Maven commands from Jenkins, though, I get the error.
So I went into Jenkins->Manage Jenkins->Configure System and I clicked on Maven installations...
I ticked
Install automatically
Version 2.2.1
I clicked Save and tried to run my project again, with the same error. When I run mvn -version I get 2.2.1, so that should be right.
From the Configure System page I have also tried
Name default
MAVEN_HOME /usr/local/apache-maven/apache-maven-2.2.1
Any ideas?
The solution to my question has two parts. First, I needed to make sure that after creating the Maven installation on the Configure System page, I specified that same installation in the build itself. Second, Jenkins does not seem to have sufficient privileges on the Red Hat box I'm running it on. Once I finally got the build pointed at the right Maven instance, I got a lot of "unable to create file/folder" errors. These permission errors could be the real reason I had so much trouble with Maven on this machine. I have not solved them yet and will create a new question for them.
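Some checks that would have saved me time (a sketch; the jenkins user and the workspace path come from the stack trace above):
sudo -u jenkins which mvn    # is mvn on the PATH Jenkins actually uses?
sudo -u jenkins mvn -version # does it report the same 2.2.1 as my shell?
# if builds then fail creating files, give Jenkins its workspace back:
sudo chown -R jenkins:jenkins /var/lib/jenkins/jobs/selenium/workspace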