Is it safe to delete Selenium.log from my CentOS server? - selenium

It seems that I've run out of room on my Master node, and I need to free some space in order to rerun my daily tests. Selenium.log is taking up a lot of space, and I'm convinced it's not currently being used. Would it be safe to delete?
Edit: I deleted the file and upon starting a new build Selenium created a new log file. I didn't experience any issues during this new build either.

You don't say what creates the file or where it is, but assuming you can already see the important details from each build in the Jenkins UI (e.g. in the console log, or in the test results), you shouldn't need to keep any log files sitting in the workspace or elsewhere.
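One caveat if the disk is completely full: deleting a log file that a running process still holds open won't free the space until that process exits. A minimal sketch of how to check, and to truncate in place instead (the path is an assumption; point it at wherever your Selenium.log actually lives):
# list any processes that still hold the log open (path is hypothetical)
lsof /var/log/Selenium.log
# truncate in place: frees the space immediately, even if the file is held open
: > /var/log/Selenium.log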

Related

Oracle VM -- Metasploitable Issue "Can't Overwrite Medium..."

Thank you for taking the time to read this. I am trying to complete a lab for one of my classes and I am having an issue creating a new Metasploitable VM. I introduced an error while editing a file in the last one and had to delete it.
I am using a VMDK obtained from SourceForge. The first machine installed with no issues. When trying to create a second machine, I continually get a "Can't overwrite medium..." error when finishing the setup.
So far I have tried:
Checking the destination folder for any existing files and folders (hidden too).
Redownloading Metasploitable
Assigning a new name to the VM, using different specs, etc.
Restarting my PC.
What I don't understand is that the error states, if I am not mistaken, that it is trying to overwrite the VMDK in my Downloads folder... when I am trying to save it in c:\Users\natsu\VirtualBox VMs. I have triple-checked all fields before running the installer. I do not understand why this is happening, considering the first Metasploitable machine installed with absolutely no issues.
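One hedged suggestion that isn't on the list above: VirtualBox keeps its own media registry, so a VMDK can still be registered under its old UUID even after the files are gone from the destination folder, which produces exactly this kind of "Can't overwrite medium" complaint. A sketch of how to inspect and release a stale registration (the UUID is a placeholder you would read from the list output):
VBoxManage list hdds
VBoxManage closemedium disk <uuid-from-the-list-above>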

I'm having trouble with extended entities

This question is related to "I need help upgrading OroCommerce to 4.1.1."
I'm getting several errors related to extended entities... I believe there must be something wrong with the cache building, but I can't find the root cause (nor a solution :( ).
I checked the DB structure on my production server against the VM where everything works just fine, and I can't see any significant difference (meaning the new fields, such as digitalAsset_id in the oro_attachment_file table or wysiwyg in oro_fallback_localization_val, are there).
I also ran an extra php bin/console oro:migration:load --force -e prod, but it didn't make a difference...
Edit:
Just checked the differences in the var/cache directory of both installations, and in fact the VM version has the methods that are missing from the prod one.
I uploaded the working code to the production server and re-ran the platform upgrade, but I'm still running into issues.
If the oro:migration:load command (or oro:platform:update, which actually triggers the migration load) failed the first time, you have to:
fix the errors,
restore from the database dump,
and run the command again.
Otherwise, there could be migrations that ended with errors
but are not executed again on the second run, which can leave you with a mess in the database schema, entity metadata, or entity config.
Also, the oro:migration:load command is not self-sufficient: some entity configuration may need to be warmed up after the schema change. Please try running oro:platform:update even if all the migrations have already been executed; it will try to warm up all the caches and could fix the error.
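For concreteness, a minimal sketch of that recovery sequence, assuming a MySQL database and a dump taken before the failed upgrade (user, database, and file names are placeholders):
# restore the pre-upgrade state so every migration runs exactly once
mysql -u oro_user -p oro_db < pre_upgrade_dump.sql
# drop the stale prod cache before re-running the upgrade
rm -rf var/cache/prod
php bin/console oro:platform:update --env=prod --force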

Gulp with WinSCP - livereload and less

I am using gulp with livereload, less, and others on a remote server. I have successfully used gulp before many times, and have never experienced this scenario.
I am using WinSCP to save/edit files (I double-click the file and it opens in Sublime Text; I save it, and WinSCP automatically uploads it back to the server).
However, when working this way, gulp-less fails almost all the time. I have two core CSS files that are compiled - one is Bootstrap's and one is my own - and each should be compiled by its own gulp task upon modification - but Bootstrap fails every time, and the other file fails about half of the time.
When I say fails, here's what I mean - with Bootstrap I always get the same error:
variable @grid-columns is undefined in file ....grid.less line no. 48
This happens even if I define @grid-columns at the very top of my main Bootstrap "import" file.
The other one fails in that livereload reloads the compiled CSS at some point before the Less file has been compiled. This should be impossible, but somehow it happens.
However, when I type touch myfile.less or touch anybootstrapfile.less from the SSH command prompt, everything works perfectly.
Obviously this is very annoying, but it's livable. I think there must be some way to fix it, though. Any ideas on what in the world could make this happen and what (if anything) I could do to fix it?
Chances are that, when WinSCP uploads the file after you save it in the internal editor, it sets the timestamp of the uploaded file to an older time than it had before.
This is particularly likely if you are using the FTP protocol, or Windows Vista or older.
For details, refer to the WinSCP FAQ:
Why are the changes I upload to the web server not visible in the web browser?
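A quick way to test this theory on the server, assuming GNU stat and hypothetical file names: right after saving in WinSCP, compare the modification times of the uploaded source and the previously compiled output. If the freshly uploaded .less file is older than the .css, the watcher's event ordering would explain both failures:
stat -c '%y  %n' custom.less custom.css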
Alright. With the help of Martin, who I now realize is the developer of WinSCP, I have solved this. I had been working with the gulp-wait plugin, inserting a pause before the final reload - and it wasn't working.
Because... that wasn't where the problem was. The problem was happening at the time of upload, in that the file was getting 'touched' too soon (or there was something with the timestamp that caused the functional equivalent).
So I moved the wait step to just before the less step is called, like so:
var plumber = require('gulp-plumber');
var wait = require('gulp-wait');
var less = require('gulp-less');

gulp.src(src)
    .pipe(plumber({ errorHandler: onError })) // keep the pipeline alive on Less errors
    .pipe(wait(500))                          // let the WinSCP upload settle before compiling
    .pipe(less())
    //etc
I'm going to experiment; 500 ms is probably longer than needed, but not so long as to be painful. This solved the problem instantly.
Thanks Martin!

Atlassian Bamboo: First plan with a simple job of downloading a local git repo

I just downloaded the free trial of the Bamboo continuous integration server and created a first plan with nothing but checking out the source code from Git. I have a local Git repository on the Bamboo machine, so the Git URL points to a local path.
The problem is that when I run the job, it never finishes, even after waiting for an hour. These are the last lines of the activity log:
07-Apr-2011 20:03:23 Checking out revision f9dc82500914333ed4bbdae5ed038771fd658c3c.
07-Apr-2011 20:03:23 Creating local git repository in '/home/bob/bamboo-home/xml-data/build-dir/DEV-DEV-1/.git'.
From the shell I can go to the directory shown in the log and see that the source code was cloned correctly into the Bamboo working directory. But the job never finishes, and the log gets no further updates from this point. I have to terminate the job manually. Any ideas? Am I missing something?
Just a guess, since the Bamboo instance we have at work pulls from AccuRev and not Git, and I've never run into this problem myself - but it may be hung because there isn't a builder defined for that plan. You might try defining a builder (even one that you know will fail) just to see if it makes it to that next step.
I had a very similar problem.
It's not a very original solution, but I just uninstalled Bamboo and installed it again. Now it works.

Weblogic forces recompile of EJBs when migrating from 9.2.1 to 9.2.3

I have a few EJBs compiled with WebLogic's EJBC, compliant with WebLogic 9.2.1.
Our customer uses WebLogic 9.2.3.
During server start, WebLogic gives the following message:
<BEA-010087> <The EJB deployment named: YYY.jar is being recompiled within the WebLogic Server. Please consult the server logs if there are any errors. It is also possible to run weblogic.appc as a stand-alone tool to generate the required classes. The generated source files will be placed in .....>
Consequently, server start takes 1.5 hours instead of 20 minutes. The next server start takes exactly the same time, meaning WebLogic does not cache the products of the recompilation. Needless to say, we cannot recompile all our EJBs for 9.2.3 just for this specific customer, so we need an on-site solution.
My questions are:
1. Is there any way of telling WebLogic to leave those EJB jars as they are and avoid the recompilation during server start?
2. Can I tell WebLogic to cache the recompiled EJBs to avoid prolonged restarts?
Our current workaround is a script that does this recompilation manually before the EAR's creation and deployment (by simply running java weblogic.appc <jar-name>), but we would rather avoid using this solution in production.
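For reference, a minimal sketch of such a wrapper, assuming the jars sit in one directory and weblogic.jar is already on the classpath (the layout is hypothetical):
#!/bin/ksh
# pre-compile every EJB jar with appc before the EAR is assembled
for jar in ejbs/*.jar; do
    java weblogic.appc "$jar" || exit 1   # stop on the first jar that fails
done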
I FIXED this problem by spending a great deal of time researching and decompiling some classes. I encountered this when migrating from WebLogic 8 to 10.
By this time you might have understood the pain of dealing with Oracle WebLogic tech support; unfortunately, they did not have a server configuration setting to disable this.
You need to do two things.
Step 1. If you open the EJB jar files, you can see entries like:
ejb-jar.xml=3435671213
com.mycompany.myejbs.ejb.DummyEJBService=2691629828
weblogic-ejb-jar.xml=3309609440
WLS_RELEASE_BUILD_VERSION_24=10.0.0.0
You will see one of these hashcodes for each of your EJB names. Make these hashcodes zero, then pack the jar file and deploy it on the server:
com.mycompany.myejbs.ejb.DummyEJBService=0
weblogic-ejb-jar.xml=0
This is just a marker file that weblogic.appc keeps in each EJB jar to trigger recompilation during server boot-up. I automated the process of setting these hashcodes to zero (a sketch follows below, after the marker file's name).
The hashcodes remain the same for each EJB even if you execute appc more than once; if you add a new EJB class or delete a class, those entries are added to this marker file.
Note 1:
How do you get this file?
If you open domains/yourdomain/servers/yourServerName/cache/EJBCompilerCache/XXXXXXXXX, you will see this file for each EJB. WebLogic sets the hashcodes to zero after it recompiles.
Note 2:
When you generate EJBs using appc, generate them to an exploded directory using -output C:\myejb instead of C:\myejb.jar. This way you can play around with the marker file.
Step 2.
You also need a PATCH from WebLogic. When you install the patch, you see a message like:
"PATCH CRXXXXXX installed successfully. Eliminate EJB recompilation for appc."
I don't remember the patch number, but you can request it from WebLogic.
You need both steps to fix the problem; the patch fixes only part of it.
Good luck!!
Cheers,
Raj
The marker file in the EJBs is WL_GENERATED.
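Putting the file name above together with Raj's step 1, a hypothetical sketch of the hashcode-zeroing automation he mentions (untested; it edits WL_GENERATED inside each jar in place):
#!/bin/ksh
for jar in *.jar; do
    unzip -o "$jar" WL_GENERATED                     # pull the marker file out of the jar
    sed 's/=[0-9][0-9]*$/=0/' WL_GENERATED > WL_GENERATED.tmp
    mv WL_GENERATED.tmp WL_GENERATED                 # zero every all-digit hashcode entry
    jar uf "$jar" WL_GENERATED                       # repack the edited marker
done
# note: the WLS_RELEASE_BUILD_VERSION line is left untouched, since its value is not all digits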
Just to update with the solution we went with - eventually we opted to recompile the EJBs once at the customer's site instead of messing with the EJBs' internal markers (we don't want Oracle saying they cannot support problems derived from this scenario).
We created two KSH scripts - the first iterates over all the EJB jars, copies them to a temp dir, and then recompiles them in parallel by running several instances of the second script, which does only one thing: java -Drecompiler=yes -cp $CLASSPATH weblogic.appc $1 (with error handling, of course :))
This solution reduced compilation time from 70 minutes to 15 minutes. After this we re-create the EAR file and redeploy it with the new EJBs. We do this once per several UAT environment creations, so we save quite a lot of time (55 min x number of envs per drop x number of drops).
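A condensed sketch of those two scripts, with the parallelism collapsed into background jobs (paths are assumptions, and the real scripts also carry the error handling mentioned above):
#!/bin/ksh
TMP=/tmp/ejb-recompile
mkdir -p "$TMP" && cp ear/ejbs/*.jar "$TMP"
for jar in "$TMP"/*.jar; do
    # each iteration is what the second script does for a single jar
    java -Drecompiler=yes -cp "$CLASSPATH" weblogic.appc "$jar" &
done
wait   # block until every background appc run has finished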