I have a job that works fine in Spoon, but I need to run it from outside Spoon, so I use Kitchen. It doesn't work: it doesn't recognize my repository, which is on another machine.
The command in my Kitchen batch file is this:
Kitchen.bat /dir:"dir_mi_repository" /rep:"mi_repository" /job:mijob.kjb /level:Detailed /log:C:\logpentaho\logKettle.log
pause
exit
This is part of the run output:
... 2020 10:14:49 AM org.apache.karaf.main.Main$KarafLockCallback lockAquired
INFO: Lock acquired. Setting startlevel to 100
2020/06/08 10:14:50 - Kitchen - Logging is at level : Detailed
2020/06/08 10:14:50 - Kitchen - Start of run.
2020/06/08 10:14:50 - RepositoriesMeta - No repositories file found in the local directory: c:\Pentaho\repositories.xml
2020/06/08 10:14:50 - RepositoriesMeta - Reading repositories XML file: C:\Pentaho\.kettle\repositories.xml
java.lang.NullPointerException
I usually create a BAT file and then schedule it with Task Scheduler. I do not see any login info for the repository; maybe that's why it's not finding the jobs.
Below is a sample which I use for the BAT File.
"C:\data-integration\Kitchen.bat"/rep:"TEST" /job:"TEST_JOB" /dir:/TEST /user:admin /pass:admin
I am using the community edition. Hope this helps.
You need to supply the username and password for the repository connection to work.
I had already tried adding the username and password and it did not work, but it seems there was a configuration error on my side, because on the server it worked perfectly once the username and password were added.
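For anyone else who hits this, the full command that ended up working was the original one from the question plus the /user and /pass options, along these lines (a sketch; the repository name, directory, job, and admin/admin credentials are just the placeholders from this thread):
Kitchen.bat /rep:"mi_repository" /dir:"dir_mi_repository" /job:"mijob.kjb" /user:admin /pass:admin /level:Detailed /log:C:\logpentaho\logKettle.log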
Thank you
I'm completely new to trying to implement GitLab's CI/CD pipelines, but it's been going quite well. In fact, for my ASP.NET project, if I specify a Publish Profile in the msbuild command that uses Web Deploy, it actually deploys the code successfully to the web server.
However, I'm now wanting to have the "build" job create artifacts which are uploaded to GitLab that I can then subsequently deploy. We're using a self-hosted instance of GitLab, for which I'm not an admin, but I can speak to the admin if I know what I'm asking for!
So I've configured my .gitlab-ci.yml file like this:
variables:
  NUGET_PATH: 'C:\Program Files\Nuget\Nuget.exe'
  NUGET_SOURCES: 'https://api.nuget.org/v3/index.json'
  MSBUILD_PATH: 'C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\MSBuild\Current\Bin\msbuild.exe'

stages:
  - build

build-job:
  variables:
    CI_DEBUG_TRACE: "true"
  stage: build
  script:
    - '& "$env:NUGET_PATH" restore ApplicationTemplate.sln -Source "$env:NUGET_SOURCES"'
    - '& "$env:MSBUILD_PATH" ApplicationTemplate\ApplicationTemplate.csproj /p:DeployOnBuild=true /p:Configuration=Release /p:PublishProfile=FolderPublish.pubxml'
  artifacts:
    paths:
      - '.\ApplicationTemplate\bin\Release\Publish\'
The output shows that this builds the code just fine, and it also seems to successfully find the artifacts for upload. However, when it uploads the artifacts, even though the request gets a 200 OK response, the process fails. Here is the log output:
So, it finds the artifacts, it attempts to upload them and even gets a 200 OK response (in contrast to the handful of similar reports of this error I've been able to find online), but it still fails due to an invalid argument.
I've already enabled verbose debugging, as you can see from the output, but I'm none the wiser. Looking at the GitLab Runner entries in the Windows Event Log on the box where the runner is hosted doesn't shed any light on things either. The total size of the artifacts is 61.1MB, so I don't think my issue is related to that.
Can anyone see from this output what's invalid? Can I identify which argument is invalid and/or why it's invalid?
Edit: Things I've tried
Specifying a value for artifacts:expire_in.
Setting artifacts:public to FALSE, since I'm using a self-hosted GitLab environment and the default value for this setting (TRUE) is not valid in such an environment (a sketch of both of these settings appears after this list).
Trying every format I can think of for the value of the artifacts:paths setting (this seems to be incredibly robust - regardless of the format I use, the Runner seems to have no problem parsing it and finding the files to upload).
Taking a cue from this question, I created a new project with a very simple build job to upload a single file:
stages:
  - build

build-job:
  variables:
    CI_DEBUG_TRACE: "true"
  stage: build
  script:
    - echo "Test" > test.txt
  artifacts:
    paths:
      - test.txt
About 50% of the time this job hangs on the uploading of the artifacts and I have to cancel it. The other half of the time it fails in exactly the same way as my previous project:
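For completeness, this is roughly how I specified the two artifact settings from the list above (a sketch; the values shown are just the ones I tried):
build-job:
  stage: build
  script:
    - echo "Test" > test.txt
  artifacts:
    expire_in: 1 week   # explicit expiry instead of the instance default
    public: false       # the default (true) is not valid on our self-hosted instance
    paths:
      - test.txt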
After countless hours working on this, it seems that ultimately the issue was that our internal Web Application Firewall was blocking some part of the transfer of artefacts to the server, or the response back from it. With the WAF reconfigured not to block traffic from the machine running the GitLab Runner, the artefacts are successfully uploaded and the job succeeds.
This would have been significantly easier to diagnose if the logging from GitLab was better. As per my comment on this issue, it should be possible to see the content of the response from the GitLab server after uploading artefacts, even when the response code is 200.
What's strange - and made diagnosing the issue even harder - is that when I worked through the issue with the admin of our GitLab instance, digging through logs and running it in debug mode, the artefact upload process was uploading something successfully. We could see, for example, the GitLab Runner's log had been uploaded to the server. Clearly the WAF's blocking was selective and didn't block everything in both directions.
I am using Hadoop 3.2.0 and trying to run a simple application in a Docker container, and I have made the required configuration changes in both yarn-site.xml and container-executor.cfg to select the LinuxContainerExecutor and the Docker runtime.
I am following the distributed shell example from a Hortonworks blog post: https://hortonworks.com/blog/trying-containerized-applications-apache-hadoop-yarn-3-1/
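For context, my container-executor.cfg follows the pattern from that post, roughly like this (a sketch; the group name, binary path, and mount paths are values from my environment, not required ones):
yarn.nodemanager.linux-container-executor.group=hadoop
banned.users=hdfs,yarn,mapred,bin
min.user.id=1000

[docker]
  module.enabled=true
  docker.binary=/usr/bin/docker
  docker.allowed.networks=bridge,host,none
  docker.allowed.rw-mounts=/data/yarn/local
  docker.privileged-containers.enabled=false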
The problem I face is that when the application is submitted to YARN, it fails with a directory-creation issue and the below error:
2019-02-14 20:51:16,450 INFO distributedshell.Client: Got application
report from ASM for, appId=2, clientToAMToken=null,
appDiagnostics=Application application_1550156488785_0002 failed 2
times due to AM Container for appattempt_1550156488785_0002_000002
exited with exitCode: -1000 Failing this attempt.Diagnostics:
[2019-02-14 20:51:16.282]Application application_1550156488785_0002
initialization failed (exitCode=20) with output: main : command
provided 0 main : user is myuser main : requested yarn user is
myuser Failed to create directory
/data/yarn/local/nmPrivate/container_1550156488785_0002_02_000001.tokens/usercache/myuser
- Not a directory
I have configured yarn.nodemanager.local-dirs in yarn-site.xml, and I can see the same reflected in the YARN web UI at localhost:8088/conf:
<property>
  <name>yarn.nodemanager.local-dirs</name>
  <value>/data/yarn/local</value>
  <final>false</final>
  <source>yarn-site.xml</source>
</property>
I do not understand why it is trying to create the usercache dir inside the nmPrivate directory.
Note: I have verified myuser's permissions on the directories and have also tried clearing the directories manually, as suggested in a related post. But no luck. I do not see any additional information about the container launch failure in any other logs.
How do I debug why the usercache dir is not resolved properly?
Really appreciate any help on this.
I realized that this was all because of the users the services were started as and the permissions on the directories the services work with.
After making sure the required changes were done, I am able to run the examples and other applications seamlessly.
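For anyone who finds this later, the fix was along the lines of making sure the NodeManager's local dirs are owned by the user the NodeManager runs as (a sketch; yarn:hadoop is an assumption, substitute whatever user and group your services actually run as):
# yarn.nodemanager.local-dirs from yarn-site.xml must be owned by the NodeManager user
chown -R yarn:hadoop /data/yarn/local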
Thanks to the Hadoop user community for the direction. Adding the link here for more details.
http://mail-archives.apache.org/mod_mbox/hadoop-user/201902.mbox/browser
Trying to set up Jenkins on one of my servers for the first time and think I might be missing something.
Jenkins 1.545
Phing 2.6.1
Jenkins builds give me the following output.
Building in workspace /var/www/vhosts/domain.co.uk/httpdocs
looking for '/var/www/vhosts/domain.co.uk/httpdocs/build.xml' ...
looking for '/var/www/vhosts/domain.co.uk/httpdocs/build.xml' ...
looking for 'build.xml' ...
buildfile 'build.xml' not found.
Build step 'Invoke Phing targets' marked build as failure
Finished: FAILURE
If I run my build.xml on its own, it works fine.
I'm using a custom workspace at the moment. Before that, I tried a symlink from the default workspace to my webroot; with that setup it found the build file but failed when trying to run Phing. I know it's a permissions problem, but I'm not sure exactly where.
I'm running this on a Plesk web server and have tried adding the jenkins user to the psacln and psaserv groups, but that didn't work either.
I use Hudson, but I think it is the same problem.
Provide the full path to the buildfile in the job's advanced settings:
${WORKSPACE}/build.xml
Assuming the Jenkins user is set correctly:
RUN_AS_USER=jenkins
Go to the custom workspace and run:
chown -R jenkins:jenkins myworkspace
If that doesn't work:
chmod -R 777 myworkspace
and tighten the permissions later.
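One quick way to confirm it really is permissions is to run Phing as the Jenkins user from a shell (a sketch; it assumes phing is on the PATH and reuses the paths from the question):
sudo -u jenkins phing -f /var/www/vhosts/domain.co.uk/httpdocs/build.xml -l
If that fails with a permission error, the chown above is the right direction.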
I hope it helps.
I've tried doing a James [1] install on my Amazon instance with MySQL as a back-end. I downloaded the MySQL connector mysql-connector-java-5.1.20.zip, unzipped it, and copied it to conf/lib and lib/, but when I start James with $ sudo bin/james start it stops. The wrapper log shows:
java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
My james-database.properties looks like this:
database.driverClassName=com.mysql.jdbc.Driver
database.url=jdbc:mysql://localhost:3306/james
database.username= ** user name **
database.password= ** secret **
vendorAdapter.database=MYSQL
openjpa.streaming=false
I didn't change anything else, but James does not work.
Any help? Thanks!
I've managed to get my apache-james-3.0-beta4 working by setting database.url=jdbc:mysql://127.0.0.1/james?create=true
The wiki says:
Using MySQL instead of Derby
Download the MySQL driver JAR from http://dev.mysql.com/downloads/connector/j/3.1.html, and put the JAR file into your ./conf/lib folder. Change the database settings in ./conf/database.properties to the following values:
# MySQL JDBC database properties
database.driverClassName=com.mysql.jdbc.Driver
database.url=jdbc:mysql://localhost/james
database.username=jamesuser
database.password=password_for_jamesuser
vendorAdapter.database=MYSQL
openjpa.streaming=false
To add the JAR to the classpath, edit ./bin/setenv.sh as shown here:
# Add every needed extra jar to this
CLASSPATH_PREFIX=../conf/lib/mysql-connector-java-5.1.13-bin.jar
However, their versioning seems off, and, admittedly, these directions don't work for me.
I know this reply comes a little bit late but I just ran into this issue.
According to Eric Charles' answer:
The conf/lib/*.jar loading in beta4 is buggy.
You need to edit the conf/wrapper.conf and change
'wrapper.java.classpath...=../conf/lib' to
'wrapper.java.classpath...=../conf/lib/*' (add a /* after lib).
You can use a text editor, or, if you are installing James from a script or something similar (a Dockerfile in my case), you can also edit it by going to the directory where wrapper.conf is located and executing:
sed -i "s/wrapper\.java\.classpath\.2=\.\.\/conf/wrapper\.java\.classpath\.2=\.\.\/conf\/lib\/\*/g" wrapper.conf
After this, all jars in conf/lib should be loaded onto the classpath the next time James is started.
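For reference, after running the sed the relevant line in wrapper.conf should read something like:
wrapper.java.classpath.2=../conf/lib/*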
I just downloaded the free trial of the Bamboo continuous integration server and created a first plan that does nothing but check out the source code from Git. I have a local Git repository on the Bamboo machine, so the Git URL points to a local path.
The problem is that when I run the job, it never finishes, even after waiting for an hour. These are the last lines of the activity log:
07-Apr-2011 20:03:23 Checking out revision f9dc82500914333ed4bbdae5ed038771fd658c3c.
07-Apr-2011 20:03:23 Creating local git repository in '/home/bob/bamboo-home/xml-data/build-dir/DEV-DEV-1/.git'.
From a shell I can go to the directory shown in the log and see that the source code was cloned correctly into the Bamboo working directory. But the job never finishes, and the log has no more updates after this point. I have to terminate the job manually. Any ideas? Am I missing something?
Just a guess, since the Bamboo instance we have at work pulls from Accurev and not Git, and I've never run into this problem myself - but it may be hung because there isn't a builder defined for that plan. You might try defining a builder (even if it's one that you know will fail) just to see if it makes it to that next step.
I had a very similar problem.
It's not a very original solution, but I just uninstalled Bamboo and installed it again. Now it works.