We have set up a test Artifactory server.
We tried to edit the pre-defined backup-daily plan by unchecking the Incremental checkbox and clicking Save.
However, when going back to the edit screen, the checkbox remains checked.
Is there a reason for that?
This also happens when defining a new (custom) backup plan.
The Incremental checkbox always remains checked.
Here is the system info.
Here are the logs from going through the process of unchecking the Incremental checkbox and clicking Save:
2017-04-07 09:50:35,126 [http-nio-8081-exec-9] [INFO ] (o.a.c.CentralConfigServiceImpl:394) - Reloading configuration...
2017-04-07 09:50:35,127 [http-nio-8081-exec-9] [INFO ] (o.a.c.CentralConfigServiceImpl:250) - Saving new configuration in storage...
2017-04-07 09:50:35,154 [http-nio-8081-exec-9] [INFO ] (o.a.c.CentralConfigServiceImpl:254) - New configuration saved.
2017-04-07 09:50:35,156 [http-nio-8081-exec-9] [INFO ] (o.a.s.ArtifactoryApplicationContext:433) - Artifactory application context set to NOT READY by reload
2017-04-07 09:50:36,561 [http-nio-8081-exec-9] [INFO ] (o.a.s.BaseTaskServiceDescriptorHandler:51) - No Replication configured. Replication is disabled.
2017-04-07 09:50:36,584 [http-nio-8081-exec-9] [INFO ] (o.a.s.ArtifactoryApplicationContext:433) - Artifactory application context set to READY by reload
2017-04-07 09:50:36,586 [http-nio-8081-exec-9] [INFO ] (o.a.c.CentralConfigServiceImpl:406) - Configuration reloaded.
Have you entered a retention period?
When not using incremental backup, you have to specify a retention period in order for the setting to "stick".
Just to be on the safe side: you can't use "0", as that value is reserved for incremental backups.
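For context, if I recall the config descriptor schema correctly, the relevant part of Artifactory's config (Admin > Advanced > Config Descriptor) ends up looking roughly like this; the cron expression and the 168-hour value are just illustrative:

<backups>
    <backup>
        <key>backup-daily</key>
        <enabled>true</enabled>
        <cronExp>0 0 2 ? * MON-FRI</cronExp>
        <!-- A non-zero retention period is what makes the non-incremental
             setting stick; a retentionPeriodHours of 0 denotes incremental. -->
        <retentionPeriodHours>168</retentionPeriodHours>
    </backup>
</backups>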
I have RabbitMQ running on my test server. The log file has grown to 20 GB and I would like to clear it. I even have a scheduled job to delete it regularly, but it is not working due to the issue below.
Issue:
If I delete the log file, either manually or via a scheduled script, the file automatically gets restored. How do I get this fixed?
My rabbitmq.config file looks like this:
[
  {rabbit, [
    {ssl_listeners, [1111]},
    {ssl_options, [{cacertfile, "D:\\RabbitMQ Server\\Certs\\certname.cer"},
                   {certfile,   "D:\\RabbitMQ Server\\Certs\\cer_cername_host.cer"},
                   {keyfile,    "D:\\RabbitMQ Server\\Certs\\cer_cername_host.pfx"},
                   {verify,     verify_peer},
                   {fail_if_no_peer_cert, false}]}
  ]}
].
Fixed this issue.
It was actually a problem with the config file not having the correct name: someone had saved it as rabbitmq copy.config rather than rabbitmq.config.
I can now see under the overview section that it is referring to the correct rabbitmq config; previously it was not picking up a config file at all.
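Separately, on the symptom itself: RabbitMQ keeps an open handle on its log file, so deleting the file out from under it tends to just get it recreated. Rather than deleting it, a cleaner approach on older RabbitMQ versions (the rotate_logs command was removed in 3.7, where rotation is configured instead) is to rotate the log and then remove the rotated copy; the log path below is hypothetical, so adjust it to your install:

:: Move the current log aside to a copy with the given suffix and reopen a fresh log.
rabbitmqctl.bat rotate_logs .old
:: The rotated copy is no longer held open and can be deleted safely.
del "D:\RabbitMQ Server\log\rabbit@HOSTNAME.log.old"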
I'm trying to set up Azure WAF (v2) on my App Gateway (currently in Detection mode first, to handle false positives), however, I'm seeing this warning:
To view your detection logs, you must have diagnostics enabled.
So, I went to Diagnostic settings and created one with the following options:
Log:
ApplicationGatewayAccessLog - (checked)
ApplicationGatewayPerformanceLog - (checked)
ApplicationGatewayFirewallLog - (checked)
Metric:
AllMetrics - (checked)
I have Send to Log Analytics checked as well, and Archive to a storage account enabled.
But I'm still seeing the same warning mentioned above.
Any idea what I might be missing here?
UPDATE: I do see records in the log with the following query, but the warning is still there:
AzureDiagnostics | where OperationName == "ApplicationGatewayFirewall"
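In case it helps, here is a slightly fuller version of the query I'm running; the projected column names are my guesses at what AzureDiagnostics emits for Application Gateway firewall logs, so adjust them to what you actually see:

AzureDiagnostics
| where ResourceType == "APPLICATIONGATEWAYS" and OperationName == "ApplicationGatewayFirewall"
| project TimeGenerated, clientIp_s, requestUri_s, ruleId_s, action_s, Message
| order by TimeGenerated desc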
My Corda application is working well except for permissions management. Currently every node can start every flow, but this should not be possible. I tried to restrict the permissions of certain nodes in the build.gradle file. Here is one node as an example:
node {
    name "O=PartyA,L=Paris,C=FR"
    p2pPort 10008
    rpcSettings {
        address("localhost:10009")
        adminAddress("localhost:10049")
    }
    rpcUsers = [[user: "user2", password: "test", permissions: ["StartFlow.FlowInitiatorOne", "StartFlow.FlowInitiatorTwo"]]]
}
I deploy my network using the deployNodes task. My flows are written in Java. Regardless of the permissions, PartyA is able to start all flows. The log file of PartyA shows that all flows are registered before the permissions are added to the node.
[INFO ] 2019-12-13T09:35:25,796Z [main] internal.NodeFlowManager.registerInitiatedFlow - Registered com.template.flows.FlowInitiatorOne to initiate com.template.flows.FlowResponderOne (version 1)
[INFO ] 2019-12-13T09:35:25,797Z [main] internal.NodeFlowManager.registerInitiatedFlow - Registered com.template.flows.FlowInitiatorTwo to initiate com.template.flows.FlowResponderTwo (version 1)
[INFO ] 2019-12-13T09:35:25,798Z [main] internal.NodeFlowManager.registerInitiatedFlow - Registered com.template.flows.FlowInitiatorThree to initiate com.template.flows.FlowResponderThree (version 1)
[INFO ] 2019-12-13T09:35:25,800Z [main] internal.NodeFlowManager.registerInitiatedFlow - Registered com.template.flows.FlowInitiatorFour to initiate com.template.flows.FlowResponderFour (version 1)
[INFO ] 2019-12-13T09:35:25,793Z [main] internal.NodeFlowManager.registerInitiatedFlow - Registered com.template.flows.FlowInitiatorFive to initiate com.template.flows.FlowResponderFive (version 1)
Below the flow registrations, the log file shows the user with the right permissions:
[INFO ] 2019-12-13T09:35:55,434Z [main] security.RPCSecurityManagerImpl.buildImpl - Constructing realm from list of users in config [User(user2, permissions=[StartFlow.FlowInitiatorOne, StartFlow.FlowInitiatorTwo])]
If I enter flow list in the terminal, PartyA will tell me that it can start all five flows. How do I fix this problem?
Your setup is correct and what you see in the log makes sense as well.
1. When the node starts, it scans the cordapps folder and registers all the flows that it sees.
2. Since you are connecting to the node directly (not through SSH or the standalone shell) and your node is in dev mode, Corda connects you to the node as user shell with password shell, and that user can run all flows.
3. To test your RPC user, you would have to write a client that connects to your node as that user; the client will then be restricted to calling only the 2 flows that you specified (see the sketch after the link below).
Read about the different ways of accessing the node: https://docs.corda.net/shell.html
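Here is a minimal sketch of such a client in Java, assuming your flows have no-arg constructors (pass your real constructor arguments to startFlowDynamic otherwise):

import net.corda.client.rpc.CordaRPCClient;
import net.corda.client.rpc.CordaRPCConnection;
import net.corda.core.messaging.CordaRPCOps;
import net.corda.core.utilities.NetworkHostAndPort;

public class RpcPermissionCheck {
    public static void main(String[] args) {
        // Connect to PartyA's RPC address as the restricted user from build.gradle.
        CordaRPCClient client = new CordaRPCClient(NetworkHostAndPort.parse("localhost:10009"));
        CordaRPCConnection connection = client.start("user2", "test");
        CordaRPCOps proxy = connection.getProxy();

        // Permitted: "StartFlow.FlowInitiatorOne" is in user2's permissions list.
        proxy.startFlowDynamic(com.template.flows.FlowInitiatorOne.class);

        try {
            // Not permitted: "StartFlow.FlowInitiatorThree" was not granted to user2,
            // so this call should fail with a PermissionException.
            proxy.startFlowDynamic(com.template.flows.FlowInitiatorThree.class);
        } catch (Exception e) {
            System.out.println("Rejected as expected: " + e.getMessage());
        }

        connection.notifyServerAndClose();
    }
}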
You can see a sample client in R3's cordapp-example (it's in Kotlin):
1. In the controller class, you call the flows using the proxy: https://github.com/corda/samples/blob/release-V4/cordapp-example/clients/src/main/kotlin/com/example/server/MainController.k
2. Notice how the Gradle task to run that webserver uses the defined RPC user: https://github.com/corda/samples/blob/69ff8d4a668c520b6695be67864f4f96ab7ec809/cordapp-example/clients/build.gradle#L64
3. The Java template comes with a predefined clients module as well: https://github.com/corda/cordapp-template-java/tree/release-V4/clients/src/main/java/com/template/webserver
When I get my Elasticsearch server settings via
curl -XGET localhost:9200/_cluster/settings
I see persistent and transient settings.
{
"persistent": {
"cluster.routing.allocation.cluster_concurrent_rebalance": "0",
"threadpool.index.size": "20",
"threadpool.search.size": "30",
"cluster.routing.allocation.disable_allocation": "false",
"threadpool.bulk.size": "40"
},
"transient": {}
}
If I set a persistent setting, it doesn't get saved to my config/elasticsearch.yml file. So my question is: when my server restarts, how does it know what my persistent settings are?
Don't tell me not to worry about it, because I almost lost my entire cluster's worth of data when it picked up all the settings in my config file after it restarted, NOT the persistent settings shown above :)
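For reference, this is how I set the persistent settings in the first place, via the cluster settings API (the value here is just an example):

curl -XPUT localhost:9200/_cluster/settings -d '{
  "persistent": {
    "cluster.routing.allocation.cluster_concurrent_rebalance": "2"
  }
}'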
Persistent settings are stored on each master-eligible node in the global cluster state file, which can be found in the Elasticsearch data directory: data/CLUSTER_NAME/nodes/N/_state, where CLUSTER_NAME is the name of the cluster and N is the node number (0 if this is the only node on this machine). The file name has the following format: global-NNN where NNN is the version of the cluster state.
Besides persistent settings this file may contain other global metadata such as index templates. By default the global cluster state file is stored in the binary SMILE format. For debugging purposes, if you want to see what's actually stored in this file, you can change the format of this file to JSON by adding the following line to the elasticsearch.yml file:
format: json
Every time cluster state changes, all master-eligible nodes store the new version of the file, so during cluster restart the node that starts first and elects itself as a master will have the newest version of the cluster state. What you are describing could be possible if you updated the settings when one of your master-eligible nodes was not part of the cluster (and therefore couldn't store the latest version with your settings) and after the restart this node became the cluster master and propagated its obsolete settings to all other nodes.
I've been using NServiceBus successfully for I don't know how long. The license claimed to have expired and informed me that I needed a new license file. So I went to the website and generated a new one (for a dev machine). Every time I debug, I get the same message requesting the license file. Is there any way to prevent this message from showing up EVERY time I try to debug? (Like setting a path programmatically, possibly?)
Andreas: The ONLY mention of the license in the log file is as follows:
2013-03-05 14:24:23,983 [1] [INFO ] [NServiceBus.Licensing.LicenseManager] - No valid license found.
2013-03-05 14:24:23,986 [1] [DEBUG] [NServiceBus.Licensing.LicenseManager] - Trial for NServiceBus v3.3 has expired.
2013-03-05 14:24:23,988 [1] [WARN ] [NServiceBus.Licensing.LicenseManager] - Falling back to run in Basic1 license mode.
Here's a quick screen capture of the prompt after I select the new file, just so you know it's SAYING it's a valid file.
I believe this can also happen if your license is for a different version of the software than you are running. You may need to request a license that aligns with your NSB version.
Once you received your free license, did you import it?
You need to click the "Browse..." button and select the license to import it!
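As a side note: if you want to avoid the import dialog entirely, later versions of NServiceBus document an appSettings key for pointing at a license file on disk. I'm not certain it applies as far back as v3.3, so treat this app.config snippet as an assumption to verify against your version:

<configuration>
  <appSettings>
    <!-- NServiceBus/LicensePath is documented for later NServiceBus versions;
         the path here is just an example location for the generated license. -->
    <add key="NServiceBus/LicensePath" value="C:\NServiceBus\License.xml" />
  </appSettings>
</configuration>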