Bluemix Java application cannot be deployed anymore - JVM

We have a Java application that has been running on Bluemix for more than a year and that we update periodically (a few times a week). In the last few days, however, even though the build is successful, we cannot launch it. The error is the following (we have never seen it before):
App/0 Error occurred during initialization of VM  Jul 10, 2017 12:13:14.002 PM
App/0 Could not find agent library /home/vcap/app/.java-J-buildpack/open_jdk_jre/bin/jvmkill-J-1.9.0_RELEASE in absolute path, with error: /home/vcap/app/.java-J-buildpack/open_jdk_jre/bin/jvmkill-J-1.9.0_RELEASE: cannot open shared object file: No such file or directory
The deploy command is
cf push "${CF_APP}" -p target/universal/myapp-SNAPSHOT.zip -b https://github.com/cloudfoundry/java-buildpack.git -k 2G

Since you are able to deploy using a previous version of the buildpack, a recent change in the buildpack is the likely cause of the breakage; pinning the buildpack to an older release tag, as sketched below, is one way to keep deploying in the meantime.
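For example, a push that pins the buildpack to a release tag instead of tracking master could look like the following (the tag name v4.2 is only an assumption; check the buildpack repository's releases page for the tags that actually exist):
cf push "${CF_APP}" -p target/universal/myapp-SNAPSHOT.zip -b https://github.com/cloudfoundry/java-buildpack.git#v4.2 -k 2G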
I was going to suggest opening a ticket on the github repo, but I see you have already done that :)

Nexus Repo - could not lock user prefs

I'm running Sonatype Nexus 3 inside a docker container, with the following startup command:
docker run -d -p 80:8081 --ulimit nofile=65536:65536 --name nexus -v nexus-data:/nexus-data -e INSTALL4J_ADD_VM_PARAMS="-Xms4g -Xmx4g -XX:MaxDirectMemorySize=6717m -Djava.util.prefs.userRoot=${NEXUS_DATA}/javaprefs" sonatype/nexus3
After updating the docker image version from 3.30.0 to 3.40.1, I keep getting the following warnings regarding user prefs.
2022-07-18 13:14:45,860+0000 WARN [Timer-0] *SYSTEM java.util.prefs - Couldn't flush user prefs: java.util.prefs.BackingStoreException: Couldn't get file lock.
2022-07-18 13:15:15,860+0000 WARN [Timer-0] *SYSTEM java.util.prefs - Could not lock User prefs. Unix error code 2.
As you can see from the startup command, the user prefs directory is inside the Docker volume, at /nexus-data/javaprefs. I have looked for existing locks inside that directory but found none. I have also tried completely deleting the directory; the warning still came up, and the folder was not being recreated by Nexus.
I honestly don't even know if this is an important issue or not, since there is little to no documentation about the user preferences folder.
Even a way to turn off the warning, which fires every 30 seconds, would be useful.
----UPDATE----
I've tried doing a clean installation of Nexus through Docker, following the simple instructions in the sonatype nexus3 Docker repository on GitHub, and I still see these warnings.
I even tried a different OS (Windows instead of Linux, through Docker Desktop), and with and without a volume for /nexus-data.
At this point I believe it to be a bug in a newer Nexus version.
TL;DR: Adding -Djava.util.prefs.userRoot=/nexus-data/javaprefs should solve the problem, assuming the Nexus data directory is at /nexus-data/.
I just had the same issue after upgrading from 3.38.1 to 3.42.0. After some investigation I found that the java.util.prefs.userRoot property did indeed get lost somewhere between those versions. The default value in vanilla Nexus 3.38.1 is /nexus-data/javaprefs.
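For reference, a sketch of the startup command from the question with the preferences root hard-coded, so it no longer depends on ${NEXUS_DATA} being expanded on the host:
docker run -d -p 80:8081 --ulimit nofile=65536:65536 --name nexus -v nexus-data:/nexus-data -e INSTALL4J_ADD_VM_PARAMS="-Xms4g -Xmx4g -XX:MaxDirectMemorySize=6717m -Djava.util.prefs.userRoot=/nexus-data/javaprefs" sonatype/nexus3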

Docker build always fails with error hcsshim::PrepareLayer - failed failed in Win32: Incorrect function. (0x1) Windows Containers

Steps to reproduce are very easy.
Create a Dockerfile.
My Dockerfile has many more lines, but I have trimmed them so we can focus on the source of the problem.
That said, these two lines alone (nothing more) reproduce the problem.
FROM microsoft/iis
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue'; $VerbosePreference = 'Continue'; "]
Run docker build . and you get hcsshim::PrepareLayer - failed failed in Win32: Función incorrecta ("Incorrect function") (0x1).
Windows 10 Pro 1909 (but it also happened in 1903)
Docker version: 2.1.0.5
Engine: 19.03.5
Machine: 0.16.2
I have found the solution to the problem.
Reading through the whole https://github.com/docker/for-win/issues/3884 issue, some users have found a simple solution: rename C:\Windows\System32\drivers\cbfsconnect2017.sys so it isn't loaded on the next boot.
Disabling that driver let me run a docker build with Windows containers for the first time in almost a year.
In my case Box Sync was the one using that driver.
EDIT: #GustavoTM found that pCloud causes the same problem.
EDIT 2: #VonC noticed that some people in the GitHub issue have solved it by deleting this other file: C:\Windows\System32\drivers\cbfs6.sys. I haven't tried that, but I mention it in case it helps others.
The good thing is that I don't need to uninstall Box; renaming the file, as sketched below, is enough.
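For reference, the rename can be done from an elevated PowerShell prompt; a minimal sketch, assuming the driver sits at the default path mentioned above:
# run as Administrator, then reboot so the renamed driver is no longer loaded
Rename-Item C:\Windows\System32\drivers\cbfsconnect2017.sys cbfsconnect2017.sys.bak
Restart-Computer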
This is still an issue (still open) with Win10.
It looks like uninstalling cloud storage providers that install file system filter drivers (Dropbox, Box, etc.) is a workaround for some users.
Uninstall cloud storage providers or virus scanners; if you identify which one is causing the problem, please share it in https://github.com/docker/for-win/issues/3884
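One way to get a hint about which product installed such a filter driver is to list the loaded minifilters and installed drivers from an elevated prompt (not every filter shows up in fltmc, but it often narrows things down):
fltmc filters
driverquery | findstr /i "cbfs"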
In my case the problem was similar, but the cbfs6.sys file had been left behind by an uninstalled application, Jungle Disk, somewhere in the folder c:\Program files\Jungle disk .... It's part of the Callback File System signed by EldoS Corporation.
The folder could only be renamed, not deleted directly. I was able to delete it right after a PC restart, before Docker started; it could likely also be deleted during a Docker service restart.

TeamCity cannot copy pagefile.sys because it is being used by another process

I am trying to get TeamCity up and running as a CI/CD server. So far I have it connected to my Git repo; it pulls the repo and builds. Great.
Now I am trying to publish it. (My web server is also the CI server and agent).
I keep getting this error:
C:\TeamCity\buildAgent\work\f56e1490ff15a5c4\P4P.Web\P4P.Web.csproj(1373, 5): warning MSB3026: Could not copy "\pagefile.sys" to "C:\inetpub\wwwroot\P4P\build\pagefile.sys". Beginning retry 8 in 1000ms. The process cannot access the file '\pagefile.sys' because it is being used by another process.
It ultimately fails and fails the entire publish process.
C:\TeamCity\buildAgent\work\f56e1490ff15a5c4\P4P.Web\P4P.Web.csproj(1373, 5): error MSB3027: Could not copy "\pagefile.sys" to "C:\inetpub\wwwroot\P4P\build\pagefile.sys". Exceeded retry count of 10. Failed.
C:\TeamCity\buildAgent\work\f56e1490ff15a5c4\P4P.Web\P4P.Web.csproj(1373, 5): error MSB3021: Unable to copy file "\pagefile.sys" to "C:\inetpub\wwwroot\P4P\build\pagefile.sys". The process cannot access the file '\pagefile.sys' because it is being used by another process.
I found this on SO. I tried downgrading the Microsoft.CodeDom.Providers.DotNetCompilerPlatform and Microsoft.Net.Compilers packages to 1.0.0. I have even tried removing them entirely.
I have looked at all the csproj files for references to these packages (including packages.config). Nothing.
I have no idea where to even begin to fix this.
My server is running Windows Server 2012 R2. I installed VS professional.
Any ideas?
I encountered the same issue.
The problem starts when you upgrade the DotNetCompilerPlatform to version 1.0.1.
To work around this issue you can downgrade to version 1.0.0 using the NuGet package manager.
EDIT: If you uninstall Microsoft.CodeDom.Providers.DotNetCompilerPlatform AND Microsoft.Net.Compilers, and then install the Microsoft.CodeDom.Providers.DotNetCompilerPlatform package again (it depends on Microsoft.Net.Compilers, so that one is installed automatically), the error seems to disappear for good.
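For reference, a sketch of both approaches in the NuGet Package Manager Console, run against the affected web project:
# Option 1: downgrade in place
Install-Package Microsoft.CodeDom.Providers.DotNetCompilerPlatform -Version 1.0.0
# Option 2: remove both packages, then reinstall (Microsoft.Net.Compilers comes back as a dependency)
Uninstall-Package Microsoft.CodeDom.Providers.DotNetCompilerPlatform
Uninstall-Package Microsoft.Net.Compilers
Install-Package Microsoft.CodeDom.Providers.DotNetCompilerPlatform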
Also check this related issue: error MSB3027: Could not copy "C:\pagefile.sys" to "bin\roslyn\pagefile.sys". Exceeded retry count of 10. Failed

Crashplan on FreeNAS missing /var/lib/crashplan/.ui_info

I have spent a few weeks on this problem now. I've been trying to get CrashPlan running on a headless FreeNAS server and have found lots of tutorials for doing this. However, the .ui_info file is missing on my FreeNAS server after installing CrashPlan.
I have searched the whole file system to try and find the elusive .ui_info file.
I've tried creating it manually with information copied from my desktop PC, but that does not help the CrashPlan Pro app connect to the CrashPlan service on FreeNAS.
INFO:
FreeNAS 9.3 STABLE
Crashplan 3.6.3_1 Plugin
The CrashPlan remote access behaviour changed several times during the last updates; however, with version 3.6.3_1 you should find the .ui_info file at
/var/lib/crashplan/.ui_info
Although the jail version is 3.6.3, it's possible that CrashPlan has updated itself; please check this with:
tail -f /usr/pbi/crashplan-amd64/share/crashplan/log/service.log.0
In the end you want CrashPlan to update itself anyway. If the update process produces an error related to bash, please run:
pkg update
pkg install bash
ln -siv /usr/local/bin/bash /bin/bash
Then restart CrashPlan while checking the log output with the tail -f command from above:
service crashplan restart
Once you have reached a recent version (>4.4.1), it's time to connect to CrashPlan remotely.
The only change necessary on the server for the easiest method (no SSH tunnel) is the serviceHost tag in /usr/pbi/crashplan-amd64/share/crashplan/conf/my.service.xml:
<serviceUIConfig>
<serviceHost>0.0.0.0</serviceHost>
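A possible way to apply that change from inside the jail, assuming the tag currently reads 127.0.0.1 (back up the file first and adjust the pattern if your default differs):
cp /usr/pbi/crashplan-amd64/share/crashplan/conf/my.service.xml /usr/pbi/crashplan-amd64/share/crashplan/conf/my.service.xml.bak
sed -i '' 's|<serviceHost>127.0.0.1</serviceHost>|<serviceHost>0.0.0.0</serviceHost>|' /usr/pbi/crashplan-amd64/share/crashplan/conf/my.service.xml
service crashplan restart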
Either repeat the following copy step every time you want to connect (the token changes after every CrashPlan restart), or use my script from here (for OS X): https://gist.github.com/Phlogi/8654e353786ed1cf0858
Copy /var/lib/crashplan/.ui_info to the correct place on your desktop machine and edit the IP address at the end (to your server's address), for example:
4339,7f1d655f-*****,192.168.1.20
That's it. You can start CrashPlan on your remote machine and it will connect properly; no other changes are necessary. Recent CrashPlan versions (>4.4.1) will actually use the IP address from .ui_info.
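As an illustration of the copy step, with placeholders (the host name "freenas" and the local destination path are assumptions; the destination is wherever your local CrashPlan client keeps its .ui_info file, which depends on the client version and OS):
scp root@freenas:/var/lib/crashplan/.ui_info /path/to/local/CrashPlan/.ui_info
# then change the address at the end of the copied file to your server's IP, e.g. 192.168.1.20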
Install the JRE. You will need to add --no-check-certificate to the JRE wget line in the install.sh file.

Deploying Custom Cartridges on Openshift Origin

I have created a new custom cartridge, which I packaged into an rpm using tito and installed using yum. The cartridge is copied by my spec file into the /usr/libexec/openshift/cartridges directory; however, when I log into the Origin home site and try to create an application, my cartridge does not show up. I went digging in the Ruby scripts and found a script named cartridge_cache.rb that seems to cache the cartridges it finds within the /usr/libexec/openshift/cartridges directory. I have tried to get Origin to reload the cache to include my new cartridge by removing all the cache files within the /var/www/openshift/broker/cache directory and then restarting the broker, but have had no success. Is there somewhere I need to hardcode my cart name to some global variable or something? Basically, does anyone know how to get a custom cart to show up on the webpage for creating a new application?
UPDATE: I ran into a slide deck that had one slide on how to install the cartridge. I still have not had any success, but here is what I have tried since the previous post:
moved my cartridge directory from /usr/libexec/openshift/cartridges to /usr/libexec/openshift/cartridges/v2
ran this command
oo-admin-cartridge -a install -s /usr/libexec/openshift/cartridges/v2/myfirstcart
which, according to the output, installed the cartridge.
cleared the cache with
bundle exec rake tmp:clear
restarted the openshift broker service
Also, just to make sure the cache was cleared out, I went into the Rails console and ran Rails.cache.clear. Still no custom cartridge on the OpenShift webpage.
It works for me after clearing the cache:
cd /var/www/openshift/broker
bundle exec rake tmp:clear
and restarting the broker service:
service openshift-broker restart
http://openshift.github.io/documentation/oo_administration_guide.html#clear-the-broker-application-cache
The MCollective service on the node server (if you have separate servers for broker and node) must be restarted, e.g. with:
service ruby193-mcollective restart
After that you should clear the caches on the broker server, e.g. with:
/usr/sbin/oo-admin-broker-cache --console
Then the new cartridges should be available.