I have one Jenkins job.
My first configuration stores the last 60 builds.
After 32 builds I get the following message:
Build execution is suspended due to the following reason(s):
Your total DEV#Cloud disk usage is over your subscription's quota. Your subscription Free allows 2 GB, but you are using 2052 MB across all services (Forge and Jenkins). To fix this, you can either upgrade your subscription or delete some data in your Forge repositories, Jenkins workspaces or build artifacts.
OK, so the build artifacts are too big.
Now I have configured the Jenkins job to store 60 builds and only 3 artifacts.
Where can I find the (old) build artifacts?
Where can I delete them?
You can manually delete build artifacts by deleting builds: select a build from the build history and delete it with the "delete this build" link. This is quite cumbersome, so a better solution is to go to the build configuration and do the following: check the "Discard old builds" checkbox, click the "Advanced" button, and put a suitable value in either "Days to keep artifacts" or "Max # of builds to keep with artifacts".
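If you have a lot of builds to clean up, here is a rough sketch of scripting that manual deletion through the Jenkins REST API rather than clicking "delete this build" one at a time. The instance URL, job name, user, API token and the keep-the-newest-3 cut-off are all placeholders, and depending on your security settings you may also need a CSRF crumb:
$jenkins = 'https://yourinstance.ci.cloudbees.com'
$job     = 'my-job'
$user    = 'me'
$token   = 'my-api-token'
# Basic auth header built from the user name and API token
$headers = @{ Authorization = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("${user}:${token}")) }
# Jenkins lists a job's builds newest first
$builds = (Invoke-RestMethod "$jenkins/job/$job/api/json?tree=builds[number]" -Headers $headers).builds
# Keep the 3 newest builds, delete the rest together with their archived artifacts
$builds | Select-Object -Skip 3 | ForEach-Object {
    Invoke-RestMethod "$jenkins/job/$job/$($_.number)/doDelete" -Method Post -Headers $headers
}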
You could also install the disk usage plugin, which gives you information on how much space your jobs are taking.
Here's a wiki article about managing disk usage on DEV@cloud.
We are using Bamboo to build our code, create artifacts, and deploy.
Problem Scenario
I have a plan that has a stage with 3 jobs (dev/test/prod). The jobs build the code and publish a 16-20 MB artifact as a shared artifact. When I run this plan, the publish takes 8-9 minutes in all 3 jobs, and the publishing happens at approximately the same timestamp in all 3 jobs.
Here is an example log statement:
simple 10-Sep-2021 13:46:15 Publishing an artifact: Preview Artifact
simple 10-Sep-2021 13:55:09 Finished publishing of artifact Required shared artifact: [Preview Artifact], pattern: [**/Artifact.*.zip] in 8.897 min
I went onto the build server (Windows Server 2012) and looked at the artifact file in the work directory and in the artifacts directory. The file timestamps are indeed almost 9 minutes apart.
This is very consistent. I can view many previous builds and it is consistently taking 8 or 9 minutes.
Fixed Scenario
I just edited the plan and disabled 2 of the jobs. Now the artifact publish step takes mere seconds:
27-Sep-2021 15:20:19 Publishing an artifact: Preview Artifact
27-Sep-2021 15:20:56 Finished publishing of artifact Required shared artifact: [Preview Artifact], pattern: [**/Artifact.*.zip] in 37.06 s
Questions
Why is the artifact publish so slow when I run concurrent jobs? What is Bamboo doing during the publish step that could take so long?
I have 20 other build plans (that do not use concurrent jobs) in which the artifact copy takes less than a minute. I have never seen this problem with any of these other plans.
I don't see anything special in the documentation, nor can I find a problem like this when I search Google and Stack Overflow. I need the artifact to be shared because I use it in a Deployment project.
EDIT:
Now that I think of it, 37 seconds is way too long as well. I just copied the file manually and it took about a second. Why is it taking so long even without concurrent jobs?
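A quick way to get that baseline number on the build server would be something like the following rough sketch (both paths are placeholders for wherever the ~16-20 MB zip actually lives):
Measure-Command {
    # Time a plain file copy of the artifact for comparison with Bamboo's publish step
    Copy-Item 'D:\some\build-dir\Artifact.1.0.zip' 'D:\temp\Artifact.1.0.zip'
} | Select-Object -ExpandProperty TotalSeconds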
I have a build where, in the pre-compilation stage, NuGet restore takes ~3 minutes to restore packages from the cache, and npm takes about as long.
These two cache restores could run in parallel, but I am not clear whether this is possible using VSTS phases.
Each phase may use different agents. You should not assume that the state from an earlier phase is available during subsequent phases.
What I would need is a way to pass the content of packages and node_modules directories from two different phases into a third one that invokes the compiler.
Is this possible with VSTS phases?
I wouldn't do this with phases. I'd consider not doing it at all. Restoring packages (regardless of type) is an I/O bound operation -- you're not likely to get much out of parallelizing it. In fact, it may be slower. The bulk of the time spent restoring packages is either waiting for a file to download, or copying files around on disk. Downloading twice as many files just takes twice as long. Copying two files at once takes double the time. That's roughly speaking, of course -- it may be a bit faster in some cases, but it's not likely to be significantly faster for the average case.
That said, you could write a script to spin off two separate jobs and wait for them to complete. Something like this, in PowerShell:
$dotnetRestoreJob = (Start-Job -ScriptBlock { dotnet restore }).Id
$npmRestoreJob = (Start-Job -ScriptBlock { npm install }).Id
do {
    # Poll both background jobs until neither is still running
    $jobStatus = Get-Job -Id @($dotnetRestoreJob, $npmRestoreJob)
    $jobStatus
    Start-Sleep -Seconds 1
}
while ($jobStatus | Where-Object { $_.State -eq 'Running' })
Of course, you'd probably want to capture the output from the jobs and check for whether there was a success exit code or a failure exit code, but that's the general idea.
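For example, here is a rough sketch of that extra checking, assuming PowerShell 5+; having each job return $LASTEXITCODE as the last thing it emits is just one way to surface failures:
$jobs = @(
    Start-Job -ScriptBlock { dotnet restore 2>&1; [pscustomobject]@{ Tool = 'dotnet'; ExitCode = $LASTEXITCODE } }
    Start-Job -ScriptBlock { npm install 2>&1; [pscustomobject]@{ Tool = 'npm'; ExitCode = $LASTEXITCODE } }
)
Wait-Job -Job $jobs | Out-Null
foreach ($job in $jobs) {
    $output = Receive-Job -Job $job
    # The last object each job emits is the status record built above
    $status = $output | Select-Object -Last 1
    $output | Select-Object -SkipLast 1          # show the tool's own output
    if ($status.ExitCode -ne 0) {
        Write-Error "$($status.Tool) restore failed with exit code $($status.ExitCode)"
    }
}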
The real problem here wasn't that npm install and NuGet restore could not be run in parallel on a VSTS hosted agent.
The real problem was that a hosted agent does not use the NuGet cache, by design.
We have determined that this issue is not a bug. Hosted agent will download nuget packages every time you queue a new build. You could not speed this nuget restore step using a hosted agent.
https://developercommunity.visualstudio.com/content/problem/148357/nuget-restore-is-slow-on-hostedagent-2017.html
So the solution that took the NuGet restore time from 240 s down to 20 s was to move the build to a local agent. That way the local cache does get used.
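As a quick sanity check on a self-hosted agent, something like the following shows the cache folders NuGet reuses between builds; this assumes nuget.exe is on the agent's PATH, and the packages path below is just the default location:
# List the cache locations NuGet reuses between builds on this machine
nuget locals all -list
# Rough size of the global packages folder (adjust the path if NUGET_PACKAGES is overridden)
"{0:N0} MB" -f ((Get-ChildItem "$env:USERPROFILE\.nuget\packages" -Recurse -File | Measure-Object -Property Length -Sum).Sum / 1MB)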
Exception Message: Unable to create the workspace '9_20_NAME' due to a mapping conflict. You may need to manually delete an old workspace. You can get a list of workspaces on a computer with the command 'tf workspaces /computer:%COMPUTERNAME%'.
Details: The path D:\Builds\NAME is already mapped in workspace 9_22_NAME. (type MappingConflictException)
Exception Stack Trace: at Microsoft.TeamFoundation.Build.Workflow.Activities.TfCreateWorkspace.Execute(CodeActivityContext context)
at System.Activities.CodeActivity`1.InternalExecute(ActivityInstance instance, ActivityExecutor executor, BookmarkManager bookmarkManager)
at System.Activities.Runtime.ActivityExecutor.ExecuteActivityWorkItem.ExecuteBody(ActivityExecutor executor, BookmarkManager bookmarkManager, Location resultLocation)
The above has been plaguing me for just over a week now, and on the surface it seems like a simple issue: delete or rename the workspaces and move on. However, this issue won't shift that easily.
In short I have tried the following:
Cleared Workspaces
Created new build definitions
Moved the build folder location (e.g. D:\builds\name to D:\builds\name-2)
Restarted the build machine
Uninstalled / Reinstalled TFS (2013 update 3)
Rebuilt the build machine and restored the TFS database
I've pretty much narrowed the issue down to something within TFS itself, but try as I might I cannot find out what.
It's worth noting that when I delete the workspaces (using TFS Sidekicks) the builds will run up to a handful of times. I've not narrowed down exactly what causes the change from success to failure, but I can delete all the workspaces, run the builds a couple of times without issue, and then suddenly the error comes back (around 2-3 builds before constant, recurring failure).
My solution was to edit my build definitions > Source Settings > Build Agent Folder and change this from a hard coded value to $(SourceDir).
A team member pointed me to this answer but I'm none the wiser as to why this setting would cause this behavior.
You will need to go to the build machine, search for the old workspace that uses the same build definition name, and delete it so the build can create a new workspace with the same name. Check this blog: https://mohamedradwan.wordpress.com/2015/08/25/unable-to-create-the-workspace-due-to-a-mapping-conflict/
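For reference, a rough sketch of that clean-up from a prompt on the build machine where tf.exe is available; the collection URL and workspace owner are placeholders, and the workspace name comes from the error message:
$collection = 'http://your-tfs-server:8080/tfs/DefaultCollection'
# List every workspace registered for this computer, regardless of owner
tf workspaces /computer:$env:COMPUTERNAME /owner:* /collection:$collection
# Delete the stale workspace named in the mapping conflict (owner is usually the build service account)
tf workspace /delete "9_22_NAME;DOMAIN\BuildServiceAccount" /collection:$collection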
Also, try to rename your build definition to something unique to see whether this will fix the issue. http://blog.casavian.eu/2014/04/02/build-workspace-issue/
It seems that I've run out of room on my master node and I need to clear some space in order to get my daily tests running again. Selenium.log is taking up a lot of space and I'm convinced it's not currently being used. Would it be safe to delete?
Edit: I deleted the file and upon starting a new build Selenium created a new log file. I didn't experience any issues during this new build either.
You don't say what creates the file or where it is, but assuming you can already see the important details from each build in the Jenkins UI (e.g. in the console log or the test results), you shouldn't need to keep any files that are sitting in the workspace or elsewhere.
I have a few EJBs compiled with WebLogic's EJBC, compliant with WebLogic 9.2.1.
Our customer uses WebLogic 9.2.3.
During server start, WebLogic gives the following message:
<BEA-010087> <The EJB deployment named: YYY.jar is being recompiled within the WebLogic Server. Please consult the server logs if there are any errors. It is also possible to run weblogic.appc as a stand-alone tool to generate the required classes. The generated source files will be placed in .....>
Consequently, server start takes 1.5 hours instead of 20 minutes. The next server start takes exactly the same time, meaning WebLogic does not cache the products of the recompilation. Needless to say, we cannot recompile all our EJBs to 9.2.3 just for this specific customer, so we need an on-site solution.
My questions are:
1. Is there any way of telling WebLogic to leave those EJB jars as they are and avoid the recompilation during server start?
2. Can I tell WebLogic to cache the recompiled EJBs to avoid prolonged restarts?
Our current workaround is a script that does this recompilation manually before the EAR's creation and deployment (by simply running java weblogic.appc <jar-name>), but we would rather avoid using this solution in production.
I fixed this problem after spending a great deal of time researching and decompiling some classes. I encountered it when migrating from WebLogic 8 to 10.
By this time you will have understood the pain of dealing with Oracle WebLogic tech support; unfortunately they did not have a server configuration setting to disable this.
You need to do two things.
Step 1. If you open the EJB jar files you can see entries like:
ejb-jar.xml=3435671213
com.mycompany.myejbs.ejb.DummyEJBService=2691629828
weblogic-ejb-jar.xml=3309609440
WLS_RELEASE_BUILD_VERSION_24=10.0.0.0
You see these hashcodes for each of your EJB names. Make these hashcodes zero, then repack the jar file and deploy it on the server:
com.mycompany.myejbs.ejb.DummyEJBService=0
weblogic-ejb-jar.xml=0
This is just a marker file that weblogic.appc keeps in each EJB jar to trigger the recompilation during server boot-up. I automated the process of setting these hashcodes to zero.
The hashcodes remain the same for each EJB even if you execute appc more than once; if you add a new EJB class or delete a class, those entries are added to this marker file.
Note 1: how do you get this file?
If you open domains/yourdomain/servers/yourServerName/cache/EJBCompilerCache/XXXXXXXXX
you will see this file for each EJB. WebLogic sets the hashcodes to zero after it recompiles.
Note 2: when you generate EJBs using appc, generate them to an exploded directory using -output C:\myejb instead of C:\myejb.jar. This way you can play around with the marker file.
Step 2. You also need a patch from WebLogic. When you install the patch you see a message like this:
"PATCH CRXXXXXX installed successfully. Eliminate EJB recompilation for appc."
I don't remember the patch number, but you can request it from WebLogic support.
You need both steps to fix the problem; the patch fixes only part of it.
Good luck!
cheers
raj
The marker file in the EJBs is WL_GENERATED.
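For what it's worth, here is a small sketch of automating the zeroing step from the answer above for a single jar; it assumes WL_GENERATED sits at the root of the jar, the JDK's jar tool is on the PATH, and myejb.jar is a placeholder name:
# Pull just the marker file out of the jar
jar xf myejb.jar WL_GENERATED
# Replace every numeric hashcode with 0, leaving non-numeric entries (like the version line) alone
(Get-Content WL_GENERATED) -replace '=\d+$', '=0' | Set-Content WL_GENERATED
# Put the edited marker file back into the jar
jar uf myejb.jar WL_GENERATED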
Just to update on the solution we went with: eventually we opted to recompile the EJBs once at the customer's site instead of messing with the EJBs' internal markers (we don't want Oracle saying they cannot support problems derived from this scenario).
We created two KSH scripts: the first iterates over all the EJB jars, copies them to a temp dir and then recompiles them in parallel by running several instances of the second script, which does only one thing: java -Drecompiler=yes -cp $CLASSPATH weblogic.appc $1 (with error handling, of course :)).
This solution reduced compilation time from 70 minutes to 15 minutes. After this we re-create the EAR file and redeploy it with the new EJBs. We do this once per several UAT environment creations, so we save quite a lot of time here (55 min x number of envs per drop x number of drops).
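For illustration only, here is a rough equivalent of that two-script approach, sketched in PowerShell rather than KSH; the source directory, temp directory and CLASSPATH handling are placeholders, and the real scripts would also need the error handling mentioned above:
$tempDir = 'C:\temp\ejb-recompile'         # placeholder temp dir
New-Item -ItemType Directory -Force -Path $tempDir | Out-Null
Copy-Item 'C:\ear-staging\*.jar' $tempDir  # placeholder source of the EJB jars
# Kick off one recompile per jar in parallel, then wait for them all
$jobs = Get-ChildItem "$tempDir\*.jar" | ForEach-Object {
    Start-Job -ScriptBlock {
        param($jar)
        # Same command the second KSH script runs: java -Drecompiler=yes -cp $CLASSPATH weblogic.appc <jar>
        java -Drecompiler=yes -cp $env:CLASSPATH weblogic.appc $jar
    } -ArgumentList $_.FullName
}
Wait-Job -Job $jobs | Receive-Job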