I'm getting this error when starting ServiceMix.
2017-06-21 16:24:51,647 | ERROR | ctivemq.server]) | configadmin | 3 - org.apache.felix.configadmin - 1.8.12 | [org.osgi.service.cm.ManagedServiceFactory, id=188, bundle=25/mvn:org.apache.activemq/activemq-osgi/5.14.3]: Updating configuration org.apache.activemq.server.598341f8-41a8-446f-b9f2-0de589a8a14c caused a problem: Cannot start the broker
org.osgi.service.cm.ConfigurationException: null : Cannot start the broker
at org.apache.activemq.osgi.ActiveMQServiceFactory.updated(ActiveMQServiceFactory.java:144)[25:org.apache.activemq.activemq-osgi:5.14.3]
at org.apache.felix.cm.impl.helper.ManagedServiceFactoryTracker.updated(ManagedServiceFactoryTracker.java:159)[3:org.apache.felix.configadmin:1.8.12]
at org.apache.felix.cm.impl.helper.ManagedServiceFactoryTracker.provideConfiguration(ManagedServiceFactoryTracker.java:93)[3:org.apache.felix.configadmin:1.8.12]
at org.apache.felix.cm.impl.ConfigurationManager$ManagedServiceFactoryUpdate.provide(ConfigurationManager.java:1620)[3:org.apache.felix.configadmin:1.8.12]
at org.apache.felix.cm.impl.ConfigurationManager$ManagedServiceFactoryUpdate.run(ConfigurationManager.java:1563)[3:org.apache.felix.configadmin:1.8.12]
at org.apache.felix.cm.impl.UpdateThread.run0(UpdateThread.java:141)[3:org.apache.felix.configadmin:1.8.12]
at org.apache.felix.cm.impl.UpdateThread.run(UpdateThread.java:109)[3:org.apache.felix.configadmin:1.8.12]
at java.lang.Thread.run(Thread.java:745)[:1.8.0_121]
Caused by: javax.management.InstanceAlreadyExistsException: org.apache.activemq:type=Broker,brokerName=amq-broker
at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)[:1.8.0_121]
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)[:1.8.0_121]
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:966)[:1.8.0_121]
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:900)[:1.8.0_121]
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:324)[:1.8.0_121]
at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)[:1.8.0_121]
at org.apache.activemq.broker.jmx.ManagementContext.registerMBean(ManagementContext.java:408)[25:org.apache.activemq.activemq-osgi:5.14.3]
at org.apache.activemq.broker.jmx.AnnotatedMBean.registerMBean(AnnotatedMBean.java:72)[25:org.apache.activemq.activemq-osgi:5.14.3]
at org.apache.activemq.broker.BrokerService.startManagementContext(BrokerService.java:2584)[25:org.apache.activemq.activemq-osgi:5.14.3]
at org.apache.activemq.broker.BrokerService.start(BrokerService.java:608)[25:org.apache.activemq.activemq-osgi:5.14.3]
at org.apache.activemq.osgi.ActiveMQServiceFactory.updated(ActiveMQServiceFactory.java:140)[25:org.apache.activemq.activemq-osgi:5.14.3]
Nothing has been deployed to it, and the only changes so far are that the camel-http4, camel-jetty9, and camel-mongodb features have been installed.
What could be causing this and how can I fix it?
I've figured out the cause. ServiceMix was started, had the features installed, stopped, zipped, sent to a new machine, and unpacked in a different directory.
The problem was fixed by deleting the following folder:
apache-servicemix-7.0.0\data\cache\bundle3\data\config\org\apache\activemq\server
which contained ActiveMQ config information that was no longer valid after the server was moved.
Other systems appeared to be affected by this as well. The proper way to fix it seems to be either to delete the data directory while Karaf is offline, or to start Karaf with the clean flag. (Note: this will wipe all changes from the base version, though.)
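A minimal sketch of both options, assuming the unpack directory from the path above, a Unix-style shell, and the standard distribution layout for the start script (on Windows, delete the folder from Explorer or with rmdir /s instead):
# Option 1: remove the stale data directory while ServiceMix is stopped (wipes the bundle cache and runtime state)
rm -rf apache-servicemix-7.0.0/data
# Option 2: start with the clean flag, which clears the same cached state on startup
apache-servicemix-7.0.0/bin/servicemix clean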
I have since moved on to using the karaf-maven-plugin to pre-configure the server, installing only the ServiceMix components I'm actually using.
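For reference, a minimal sketch of such a karaf-maven-plugin setup in a pom.xml (used with the karaf-assembly packaging); the version and feature list here are illustrative assumptions, not taken from my actual project:
<plugin>
  <groupId>org.apache.karaf.tooling</groupId>
  <artifactId>karaf-maven-plugin</artifactId>
  <version>4.0.9</version>
  <extensions>true</extensions>
  <configuration>
    <!-- install only the features the application actually uses -->
    <bootFeatures>
      <feature>camel-http4</feature>
      <feature>camel-mongodb</feature>
    </bootFeatures>
  </configuration>
</plugin>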
Steps to reproduce are very easy.
Create a Dockerfile.
My Dockerfile has many more lines, but I have trimmed them so we can focus on the source of the problem.
That said, these two lines alone (with nothing more) reproduce the problem.
FROM microsoft/iis
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue'; $VerbosePreference = 'Continue'; "]
Run docker build . and you get hcsshim::PrepareLayer - failed failed in Win32: Función incorrecta. (0x1). ("Función incorrecta" is Spanish for "Incorrect function".)
Windows 10 Pro 1909 (but it happened in 1903 too)
Docker version: 2.1.0.5
Engine: 19.03.5
Machine: 0.16.2
I have found the solution to the problem.
Reading through the whole https://github.com/docker/for-win/issues/3884 issue, some users have found a simple solution: rename C:\Windows\System32\drivers\cbfsconnect2017.sys so it isn't loaded on the next boot.
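A sketch of that rename from an elevated command prompt (the .bak extension is arbitrary; a reboot is needed before the driver is actually unloaded, and you may first need to take ownership of the file):
ren C:\Windows\System32\drivers\cbfsconnect2017.sys cbfsconnect2017.sys.bak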
Disabling that driver let me run a docker build with Windows containers for the first time in almost a year.
In my case, Box Sync was the application using that driver.
EDIT: #GustavoTM found that pCloud causes the same problem.
EDIT2: #VonC noticed that some people in the GitHub issue have solved it by deleting this other file: C:\Windows\System32\drivers\cbfs6.sys. I haven't tried that, but I mention it in case it helps others.
The good thing is that I don't need to uninstall Box; I only have to rename that file.
This is still an issue (still open) with Win10.
It looks like uninstalling cloud storage providers that use file system filters (Dropbox, Box, etc.) is a workaround for some users.
Uninstall cloud storage providers or virus scanners; if you identify which one is causing the problem, please share it in https://github.com/docker/for-win/issues/3884
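To help identify the culprit, Windows can list the currently registered file system minifilter drivers; cloud-sync and antivirus filters show up in this list (run from an elevated prompt):
fltmc filters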
In my case the problem was similar, but the file cbfs6.sys had been left behind by an uninstalled application, Jungle Disk, somewhere in the folder c:\Program files\Jungle disk .... It's part of the Callback File System signed by EldoS Corporation.
The folder could only be renamed, not deleted directly. I was able to delete it immediately after a PC restart, before running Docker; deleting it during a Docker service restart should work too.
I am trying to get TeamCity up and running as a CI/CD server. So far I have it connected to my Git repo, and it pulls the repo and builds. Great.
Now I am trying to publish the build. (My web server is also the CI server and agent.)
I keep getting this error:
C:\TeamCity\buildAgent\work\f56e1490ff15a5c4\P4P.Web\P4P.Web.csproj(1373, 5): warning MSB3026: Could not copy "\pagefile.sys" to "C:\inetpub\wwwroot\P4P\build\pagefile.sys". Beginning retry 8 in 1000ms. The process cannot access the file '\pagefile.sys' because it is being used by another process.
It ultimately fails and fails the entire publish process.
C:\TeamCity\buildAgent\work\f56e1490ff15a5c4\P4P.Web\P4P.Web.csproj(1373, 5): error MSB3027: Could not copy "\pagefile.sys" to "C:\inetpub\wwwroot\P4P\build\pagefile.sys". Exceeded retry count of 10. Failed.
C:\TeamCity\buildAgent\work\f56e1490ff15a5c4\P4P.Web\P4P.Web.csproj(1373, 5): error MSB3021: Unable to copy file "\pagefile.sys" to "C:\inetpub\wwwroot\P4P\build\pagefile.sys". The process cannot access the file '\pagefile.sys' because it is being used by another process.
I found this on SO. I tried downgrading the Microsoft.CodeDom.Providers.DotNetCompilerPlatform and Microsoft.Net.Compilers packages to 1.0.0. I have even tried removing them entirely.
I have looked at all the csproj files for references to these packages (including packages.config). Nothing.
I have no idea where to even begin to fix this.
My server is running Windows Server 2012 R2. I installed VS professional.
Any ideas?
I encountered the same issue.
The problem starts when you upgrade DotNetCompilerPlatform to version 1.0.1.
To work around this issue you can downgrade to version 1.0.0 using the NuGet package manager.
EDIT: If you uninstall Microsoft.CodeDom.Providers.DotNetCompilerPlatform AND Microsoft.Net.Compilers, and then install the DotNetCompilerPlatform package again (it has a dependency on the Microsoft.Net.Compilers package, so that is installed automatically), the error seems to disappear for good.
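A sketch of that uninstall/reinstall sequence in the NuGet Package Manager Console (package IDs as named above; NuGet resolves the versions):
PM> Uninstall-Package Microsoft.CodeDom.Providers.DotNetCompilerPlatform
PM> Uninstall-Package Microsoft.Net.Compilers
PM> Install-Package Microsoft.CodeDom.Providers.DotNetCompilerPlatform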
Also check this related issue: error MSB3027: Could not copy "C:\pagefile.sys" to "bin\roslyn\pagefile.sys". Exceeded retry count of 10. Failed
I've pushed my artifact to the OSS Nexus repo and added it as a dependency to another project. IDEA keeps warning me:
[warn] Unable to reparse com.github.kondaurovdev#jsonapi_2.11;0.1-SNAPSHOT from sonatype-snapshots, using Fri May 13 17:12:52 MSK 2016 [warn] Choosing sonatype-snapshots for com.github.kondaurovdev#jsonapi_2.11;0.1-SNAPSHOT
Maybe I pushed the artifact in a wrong way somehow? But I did it earlier and everything was OK. How do I get rid of these warnings? Or should I just ignore them?
I had the same issue.
Did you publish your SNAPSHOT version to your artifactory? If so, this might be your problem.
As you know, when publishing locally your snapshot version is stored in the .ivy2/local directory. The remote versions are stored in the .ivy2/cache directory.
When looking into the .ivy2/cache/{dependency} folder you will see that it has only downloaded the xml and properties files. So just the metadata and no jars. This is the actual reason why it can't be parsed: it's not there.
Since .ivy2/cache takes precedence over .ivy2/local, it won't see your locally published version. There are two ways to fix this:
Update your snapshot version number (recommended)
Remove the SNAPSHOT from your artifactory and remove the .ivy2/cache/{dependency} folder on every client that has a local version (a sketch of this follows below).
In my opinion the first one is the way to go.
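If you do need the second option, a sketch of clearing the stale cache entry on a client, with the coordinates taken from the warning above and the default Ivy home under the user's home directory assumed:
rm -rf ~/.ivy2/cache/com.github.kondaurovdev/jsonapi_2.11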
I had the same issue, and it went away after I added the following to my build.sbt:
updateOptions := updateOptions.value.withLatestSnapshots(false)
You can find more details at https://github.com/sbt/sbt/issues/2650
A strange event is happening in a Mule project. I have the application XML, which is JPC.xml. It normally appears in mule-deploy.properties as follows:
redeployment.enabled=true
encoding=UTF-8
config.resources=JPC.xml
domain=default
When I choose Run As > Mule Application, a build kicks off in the background prior to the deploy and run. During that time mule-deploy.properties becomes:
redeployment.enabled=true
encoding=UTF-8
config.resources=
domain=default
And when the application runs, it says it is missing the mule-config.xml.
What is erasing it?
I think I may have found the root of this issue.
It seems to be a bug related to jdk_1.7.0_45 having to do with XML parsing. See: What's causing these ParseError exceptions when reading off an AWS SQS queue in my Storm cluster
I noticed several errors logged in eclipse/anypoint as:
!ENTRY org.mule.tooling.core 4 0 2014-11-19 14:16:41.081
!MESSAGE Error opening resource measurement_scheduler.xml
!STACK 0
javax.xml.stream.XMLStreamException: ParseError at [row,col]:[1,1]
Message: JAXP00010001: The parser has encountered more than "64000" entity expansions in this document; this is the limit imposed by the JDK.
I also noticed that after restarting Anypoint, I would be able to build with Maven successfully and my mule-deploy.properties file would again have content. Until, at some point after several edits to things in Anypoint, a mvn build would again wipe out the contents of mule-deploy.properties.
I further noticed that once this problem started to happen in one project in Anypoint, it would ALSO start happening in ANY project I built in Anypoint...until Anypoint was restarted.
It seems this bug in JDK 1.7.0_45 mistakenly applies this XML parser limit to all opened files cumulatively, instead of per file. I suspect this causes Anypoint to not finish parsing all of the XML docs that make up my app, and therefore it couldn't re-create mule-deploy.properties, leaving it blank.
Upgrading to a newer JDK would fix this.
Another way to work around it is to override this limit for the XML parser by adding the following to ${JAVA_HOME}/jre/lib/jaxp.properties:
jdk.xml.entityExpansionLimit=0
jdk.xml.maxGeneralEntitySizeLimit=0
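These limits are also exposed as JVM system properties, so another (untested) route should be passing them to the IDE's JVM, e.g. after -vmargs in AnypointStudio.ini; the file name is an assumption based on Anypoint being Eclipse-based:
-Djdk.xml.entityExpansionLimit=0
-Djdk.xml.maxGeneralEntitySizeLimit=0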
I am not certain that both limits need to be set to work around this. Possibly only entityExpansionLimit is needed.
After making this change I am now happily able to use Anypoint again. Beware that using this workaround possibly opens you up to a denial-of-service attack through the XML parser if the same JRE is used for other, less-trusted processes.
I'm following the steps outlined at this link; however, when I try to start the server nothing happens, nor can I connect to anything from the client. Does anyone know how to run this?
When I try from a command prompt instead of double-clicking redis-server.exe, I get this message:
[11868] 23 Jul 11:58:26.325 # QForkMasterInit: system error caught. error code=0x000005af, message=VirtualAllocEx failed.: unknown error
http://bartwullems.blogspot.ca/2013/07/unofficial-redis-for-windows.html
The easiest way to install Redis is through NuGet:
Open Visual Studio
Create an empty solution so that NuGet knows where to put the packages
Go to the Package Manager Console: Tools –> Library Package Manager –> Package Manager Console
Type Install-Package Redis-64
Go to the Packages folder and browse to the Tools folder. Here you’ll find Redis-server.exe. Double-click it to start it.
Redis is ready to use and starts listening on a specific port (6379 in my case).
Let’s open up a client and try to put a value into Redis. Start Redis-cli.exe. It already connects to the same port by default.
Add a value by executing a command, then read the value again (these steps were shown as screenshots in the original post; a sketch of the session follows below).
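A sketch of that redis-cli session; the key and value are placeholders, since the original screenshots are not available:
127.0.0.1:6379> SET mykey "hello"
OK
127.0.0.1:6379> GET mykey
"hello"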
Try running it with redis-server --maxheap 4000000
Miguel is correct, but it is not that simple. To start redis-server either as a service or from the command prompt, the amount of available RAM and disk space must be sufficient for Redis to run as configured.
Now, if no configuration file is specified when running Redis, it will use the default configuration values. All of this is documented in the redis.windows.conf file as well as in the document "Redis on Windows.docx" (both deployed with the Redis installation).
In my experience, errors when starting Redis usually come from a lack of available resources (RAM or disk space) or some incorrect configuration of the maxheap or maxmemory parameters.
To troubleshoot this kind of behavior, check your system's available resources and try running redis-server from the command line, varying the maxmemory, maxheap, and/or heapdir parameters. Setting the loglevel parameter to verbose might also help diagnose the issue.
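A sketch of such a troubleshooting run; the sizes and the heap directory are placeholders to adapt to your machine (maxheap and heapdir are specific to the Windows port):
redis-server --maxmemory 512mb --maxheap 1024mb --heapdir "D:\redisheap" --loglevel verbose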
Regards