A resource named jms/ressource.factory already exists - glassfish

I get this error when I try to create a new JMS resource:
A resource named jms/cache.erpSSO.factory already exists.
But I'm sure the resource does not exist; I checked with:
[glassfish#F-das ~]$ asadmin list-jms-resources
jms/sifc
jms/cache.harmoSSO
jms/suivi
jms/cache.moduleSSO
jms/sifcFactory
jms/cache.harmoSSO.factory
jms/suiviFactory
jms/cache.moduleSSO.factory
Command list-jms-resources executed successfully.
I restarted the server and I still get the same error.
Does anyone have an idea about this?
Thank you.
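One thing that is sometimes worth checking (my addition, not part of the original post): the name may still be defined in domain.xml even when the listing for the target you queried does not show it. A minimal sketch, assuming a default domain layout (adjust the GlassFish path to your install):
# Search the domain configuration for the resource name
grep -n "cache.erpSSO.factory" /path/to/glassfish/domains/domain1/config/domain.xml
# List JMS resources for the whole domain rather than the default server target
asadmin list-jms-resources domain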

Related

ERROR: git repository not found at `https://github.com/JuliaRegistries/General.git`

Question: I have copied one of my Julia projects from my local PC to a cluster running Linux. When I try to run instantiate in the Pkg REPL, the error ERROR: git repository not found at `https://github.com/JuliaRegistries/General.git` occurs, and I have no idea why.
What I have checked: I entered status in the Pkg REPL and everything seems fine, and there is only one directory, logs, under ~/.julia.
so what caused the problem and how should I fix it? Thank you in advance!
I finally solved this problem by cloning https://github.com/JuliaRegistries/General.git manually:
cd ~/.julia
mkdir registries && cd registries
git clone https://github.com/JuliaRegistries/General.git
This creates a new directory, General, and now instantiate runs without any error!
I am still not clear why the error occurred when instantiate tried to clone this repository automatically, so that part remains an open question.
I'm not sure if these could be the causes, but:
Can the server connect to GitHub? i.e., can you ping www.github.com? It is possible that a work server is hidden behind a firewall, although it would be quite odd not to allow outbound traffic.
Also, on your server account, have you registered your public key with GitHub? It's been a while since I've had to do this, but you may also need to set your username and e-mail address. An easy test would be to try git clone https://github.com/JuliaRegistries/General.git on the server and see if that succeeds. If not, perhaps the error message it gives will be helpful.
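To make that test concrete, here is a small sketch of the checks suggested above (the /tmp/General-test path is just a throwaway location chosen for illustration):
# Basic connectivity check to GitHub
ping -c 3 github.com
# Try the clone that instantiate would perform
git clone https://github.com/JuliaRegistries/General.git /tmp/General-test
# If plain git works but instantiate still fails, adding the registry explicitly through Pkg may help
julia -e 'using Pkg; Pkg.Registry.add(Pkg.RegistrySpec(url="https://github.com/JuliaRegistries/General.git"))'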

No MetadataProvider available - shibsp::ConfigurationException

I recently upgraded Shibboleth from Shibboleth-sp-2.5.6.0-win64 to Shibboleth-sp-2.6.0.0-win64, and the Apache web server from 2.4.16 to 2.4.23.
After the upgrade, when I try to access my application I get the following error:
shibsp::ConfigurationException
The system encountered an error at Fri Oct 14 20:19:51 2016
To report this problem, please contact the site administrator at root@localhost.
Please include the following message in any email:
shibsp::ConfigurationException at (https://xxxxxx.xxxx/)
No MetadataProvider available.
When I access https://xxxxx.xxxxx/Shibboleth.sso/Metadata, the metadata file is downloaded and the details seem correct.
Does anyone know why this error occurs and how we can solve it?
If it is of any help, I had written this:
<MetadataProvider type="XML" validate="true" file="/etc/shibboleth/idp-metadata.xml" />
instead of this:
<MetadataProvider type="XML" validate="true" path="/etc/shibboleth/idp-metadata.xml" />
The correct XML attribute is path. I'm using Shibboleth SP version 3.
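As a quick sanity check after editing the configuration (my addition, not part of the answer above), the SP daemon can be asked to validate its configuration before restarting:
# Test-load the Shibboleth SP configuration and report errors
shibd -t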
Ensure that you have a MetadataProvider section in the defaults as well as in the override, if one exists. For me, even though a section was properly created for the override, it also needed one in the defaults.
Just for the record: most configuration of your SP takes place in shibboleth2.xml. Locate this file on your server and edit the settings as needed.
For Linux installations:
Be sure not to edit the copy of this file in your installation path, but the one in your distribution path (i.e. /etc/shibboleth/shibboleth2.xml), otherwise your changes will not be visible.
A restart of shibd (systemctl restart shibd) is mandatory after changing shibboleth2.xml.
Try the following steps:
1) Go to shar.log and check which entity ID is returned in the IdP's assertion message.
2) Go to the corresponding IdP metadata on the SP side and compare the two entity IDs.
3) There is most likely a mismatch between the two, which is why the SP cannot find the IdP it is talking to and cannot proceed further.
Finally, update the entity ID in the IdP's metadata and restart shibd; it should then work. A quick way to compare the two values is sketched below.
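A minimal sketch of that comparison on a Linux SP, assuming default file locations (adjust the metadata and log paths to your installation):
# Entity ID declared in the IdP metadata file the SP is configured with
grep -o 'entityID="[^"]*"' /etc/shibboleth/idp-metadata.xml
# Entity IDs the SP has recently logged, for comparison
grep -i 'entityid' /var/log/shibboleth/shibd.log | tail -n 20
# Restart the SP daemon after correcting the metadata
sudo systemctl restart shibd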

ERROR: The overall deployment failed because too many individual instances failed deployment

I'm trying to deploy using CircleCI -> S3 -> CodeDeploy -> EC2.
I was able to upload the deployment bundle to S3 from CircleCI, but I am unable to deploy it from S3 to the EC2 instance. Here's the error.
The overall deployment failed because too many individual instances
failed deployment, too few healthy instances are available for
deployment, or some instances in your deployment group are
experiencing problems. (Error code: HEALTH_CONSTRAINTS)
The error comes from CodeDeploy. I can't figure out why it happens or how to fix it.
I'd appreciate any advice.
If you are running on Ubuntu there might be plenty of reasons; here is a checklist you can go through:
Check that the CodeDeploy agent is installed on your EC2 instance. Refer to this document to install the agent:
https://docs.aws.amazon.com/codedeploy/latest/userguide/codedeploy-agent-operations-install-ubuntu.html
$ sudo service codedeploy-agent status
If you are running Ubuntu release 20.x and you get this error:
./install:22:in `block in method_missing': undefined method `path' for #<IO:> (NoMethodError)
try running the install file like this:
sudo ./install auto > /tmp/logfile
Check that your EC2 instance has a CodeDeploy role: create a CodeDeploy role and assign it to the instance, see https://docs.aws.amazon.com/codedeploy/latest/userguide/getting-started-create-service-role.html.
If you assign the EC2 role after the instance was launched, restart the server. (A quick way to verify both the agent and the role from inside the instance is sketched after this checklist.)
Check your appspec.yml file placement as per the top answer, and try to avoid any long timeouts in it.
Log into your instance and check the error log:
$ tail -f /var/log/aws/codedeploy-agent/codedeploy-agent.log
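As mentioned above, here is a quick way to verify both the agent and the instance role from inside the instance (the service name matches the Ubuntu install; the second command queries the EC2 instance metadata service and, on IMDSv1, prints the attached role name, or an error if no instance profile is attached):
sudo service codedeploy-agent status
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/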
You should be able to figure out what caused the individual instances to fail by digging into the deployment instance details:
http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-view-instance-details.html
These should contain more detailed information about why your application was unable to be deployed.
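If you prefer the CLI over the console, the same per-instance details can be pulled like this (the deployment ID and instance ID below are placeholders):
# List the instances that took part in the deployment
aws deploy list-deployment-instances --deployment-id d-XXXXXXXXX
# Show lifecycle event status and error details for one instance
aws deploy get-deployment-instance --deployment-id d-XXXXXXXXX --instance-id i-0123456789abcdef0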
This error is commonly due to problems in the configuration of the appspec.yml or appspec.json file (it depends on the format you are using).
If you have any hooks, I recommend removing them, checking whether the deployment works, and then adding the hooks back one by one so you can identify which one causes the error.
The appspec.yml file should be located at the root of your project:
│-- appspec.yml
│-- index.html
└-- scripts
    │-- install_dependencies
    │-- start_server
    └-- stop_server
In the scripts folder you place the scripts that you want to run for each hook (they also need to be executable; see the note after the example below).
Here is an example of the appspec.yml file:
version: 0.0
os: linux
files:
  - source: /index.html
    destination: /var/www/html/
hooks:
  BeforeInstall:
    - location: scripts/install_dependencies
      timeout: 300
      runas: root
    - location: scripts/start_server
      timeout: 300
      runas: root
  ApplicationStop:
    - location: scripts/stop_server
      timeout: 300
      runas: root
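As noted above, the hook scripts themselves also need a shebang line and execute permission, otherwise the lifecycle event can fail even with a correct appspec.yml (this is my addition, not part of the answer). For example:
chmod +x scripts/install_dependencies scripts/start_server scripts/stop_server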
I hope I can help you 😃👻🕺🏾
Make sure the CodeDeploy Host Agent Service is running in your target EC2 instance.
The error you are facing is a generic error message thrown on any lifecycle event failure, which could be BeforeBlockTraffic, BlockTraffic, ApplicationStop, etc.
The first step in this case would be to check whether the CodeDeploy agent is running, especially if the first event, i.e. BeforeBlockTraffic, is the one that failed.
The event failure message in the deployment details will tell you the exact error behind it.
From the failed deployments, I can see all lifecycle events were skipped. Instance i-0bcc36e73851297f2 is currently in the Stopped state, and I can see the IAM instance profile is missing. Your Amazon EC2 instances need permission to access the Amazon S3 buckets or GitHub repositories where the applications that will be deployed by AWS CodeDeploy are stored. To launch Amazon EC2 instances that are compatible with AWS CodeDeploy, you must create an additional IAM role, an instance profile [1].
For such failures, you can always begin with the general troubleshooting checklist for a failed deployment [2] and then look at the troubleshooting guides for deployment issues and instance issues [3].
[1] http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-create-iam-instance-profile.html
[2] http://docs.aws.amazon.com/codedeploy/latest/userguide/troubleshooting-general.html
[3] http://docs.aws.amazon.com/codedeploy/latest/userguide/troubleshooting.html
Check the status of the Code Deploy Agent. In my case, the agent wasn't up.
Please also check the role given to the EC2 machine (where the agent is running). It should have S3 access as well. This resolved my issue.
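A hedged way to inspect what the instance role is allowed to do, using the AWS CLI (CodeDeployEC2Role is a hypothetical name; substitute the role actually attached to your instance):
# Managed policies attached to the role
aws iam list-attached-role-policies --role-name CodeDeployEC2Role
# Inline policies defined on the role
aws iam list-role-policies --role-name CodeDeployEC2Role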
"The CodeDeploy agent did not find an AppSpec file within the unpacked revision directory at revision-relative path 'appspec.yml'"
Please place your appspec.yml file in your root folder to solve this error
To access your after script and before script

Could not connect to ActiveMQ Server - activemq for mcollective failing

We are continuously getting this error:
2014-11-06 07:05:34,460 [main ] INFO SharedFileLocker - Database activemq-data/localhost/KahaDB/lock is locked... waiting 10 seconds for the database to be unlocked. Reason: java.io.IOException: Failed to create directory 'activemq-data/localhost/KahaDB'
We have verified that ActiveMQ is running as the activemq user and that the directories are owned by activemq. It will not create the directories automatically, and if we create them ourselves, it still gives the same error. The service starts fine, but it just continuously spits out the same error. There is no lock file, as it will not generate any files or directories.
Another way to fix this problem, in one step, is to create the missing symbolic link in /usr/share/activemq/. The permissions are already set properly on /var/cache/activemq/data/, but it seems the activemq RPM is not creating the symbolic link to that location as it should. The symbolic link should be as follows: /usr/share/activemq/activemq-data -> /var/cache/activemq/data/. After creating the symbolic link, restart the activemq service and the issue will be resolved.
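A minimal sketch of that fix, using the paths from the answer above (the restart command may differ by distribution):
# Create the missing symbolic link, then restart the service
sudo ln -s /var/cache/activemq/data /usr/share/activemq/activemq-data
sudo service activemq restart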
I was able to resolve this with the following steps (a command sketch follows the list):
ensure activemq is the owner of, and has access to, /var/log/activemq and all sub-directories.
ensure /etc/init.d/activemq has: ACTIVEMQ_CONFIGS="/etc/sysconfig/activemq"
create the file activemq in /etc/sysconfig if it doesn't exist.
add this line: ACTIVEMQ_DATA="/var/log/activemq/activemq-data/localhost/KahaDB"
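Roughly, those steps translate to the commands below (a sketch assuming the paths above match your install; the activemq group name is an assumption):
# Give the activemq user ownership of the log tree
sudo chown -R activemq:activemq /var/log/activemq
# Create the sysconfig file if needed and point ACTIVEMQ_DATA at the KahaDB location
sudo touch /etc/sysconfig/activemq
echo 'ACTIVEMQ_DATA="/var/log/activemq/activemq-data/localhost/KahaDB"' | sudo tee -a /etc/sysconfig/activemq
sudo service activemq restart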
The underlying problem was that ActiveMQ 5.9.x was using /usr/share/activemq as its KahaDB location.

How do I delete a Workflow Manager 1.0 child scope with permissions messed up?

I have a Workflow Manager installation with an "OAuthS2SSecurityConfiguration" applied to the child workflows that I want to delete.
When I run any of the commands below, I get an "internal server error". Since I see nothing in the event logs or local logs, I can only assume that this is an authentication issue (I have deleted workflows in the past with no problem).
var dcli = new WorkflowManagementClient("http://host:port/PubSubActivitiesSample");
dcli.Activities.Delete();
dcli.Workflows.Delete();
dcli.CurrentScope.Delete();
I've also tried overwriting the workflow with a null security configuration. I get an internal HTTP error on that too.
Finally, I ran the following PowerShell command:
remove-wfscope -ScopeURI http://myhost:12291/PubSubActivitiesSample
I get an internal server error for this as well:
Remove-WFScope: An internal error occurred. For more details please see the server logs. HTTP headers received from the server: ...