Tuning the JVM on CloudBees

Well, I have an app running on CloudBees, and it needs some extra Java memory (the app uses Hibernate and Spring).
Reading other posts and the CloudBees documentation, I think the way to set the minimum and maximum JVM memory is something like this: bees app:deploy -a account/appId -R JAVA_OPTS="-Xms512m -Xmx512m" target/app.ear. But when I do this and try to run the app, it throws the following exception:
Error occurred during initialization of VM
Incompatible minimum and maximum heap sizes specified
What am I doing wrong, and what can I do to resolve this problem?
In addition, I'm using JBoss, and when I run "bees app:info" the output is the following:
Application     : account/appId
Title           : account/appId
Created         : Mon Aug 04 11:49:18 EDT 2014
Status          : active
URL             : ...
clusterSize     : 1
container       : java_small
containerType   : jboss71
hibernateTimeout: 7200
jvmPermSize     : 256
maxMemory       : 256
proxyBuffering  : false
securityMode    : PUBLIC
Thanks

Well, I finally found my error.
I solved this problem by upgrading to a paid CloudBees account. A free account doesn't allow increasing the JVM memory above 256 MB, so when I tried to set -Xms512m, the requested minimum heap exceeded the maximum heap allowed by the plan, which produced the "Incompatible minimum and maximum heap sizes" error.
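A quick way to confirm what heap the container actually grants is to log the JVM's runtime memory limits from inside the app. This is only a minimal, self-contained sketch using the standard Runtime API (no CloudBees-specific API assumed):

public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024L * 1024L;
        // maxMemory() reflects the effective -Xmx the JVM was actually started with
        System.out.println("max heap (MB):   " + rt.maxMemory() / mb);
        // totalMemory() is the heap currently reserved; it starts near -Xms
        System.out.println("total heap (MB): " + rt.totalMemory() / mb);
        System.out.println("free heap (MB):  " + rt.freeMemory() / mb);
    }
}

If the plan really caps the heap at 256 MB, the max heap reported here should come back around that value no matter what -Xmx you request.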

Solr 'Service Unavailable'

So, I've been following 'Django by Example', and one of the assignments is to integrate Django and Solr with Haystack. I tried to follow the README included in the Solr package, but for some reason it doesn't seem to work when I visit localhost:8983. At http://127.0.0.1:9999/solr/ I get the message 'Service Unavailable', and in the terminal I get:
"* [WARN] Your open file limit is currently 1024. It should
be set to 65000 to avoid operational disruption. If you no longer
wish to see this warning, set SOLR_ULIMIT_CHECKS to false in your
profile or solr.in.sh
[WARN] * Your Max Processes Limit is currently 62864. It should be set to 65000 to avoid operational disruption. If you no
longer wish to see this warning, set SOLR_ULIMIT_CHECKS to false in
your profile or solr.in.sh Waiting up to 180 seconds to see Solr
running on port 9999 [|] Started Solr server on port 9999
(pid=11255). Happy searching!"
I have no idea what to do, so please help.
Let me add that I started Solr with
./bin/solr start
in bash.
Update:
I tried the
./solr status
command. Output:
"Found 4 Solr nodes: Solr process 11255 from
/home/pawe/Pulpit/solr-8.1.1/solr/bin/solr-9999.pid not found.
Solr process 10702 from
/home/pawe/Pulpit/solr-8.1.1/solr/bin/solr-8888.pid not found.
Solr process 3174 running on port 8983
Error: Could not find or load main class org.apache.solr.util.SolrCLI
Caused by: java.lang.ClassNotFoundException:
org.apache.solr.util.SolrCLI Solr process 11124 from
/home/pawe/Pulpit/solr-8.1.1/solr/bin/solr-8866.pid not found. "
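One thing that stands out in the question is the port mismatch: the status output shows a node running on 8983, while the browser was pointed at 9999. A quick connectivity check against both ports can help separate "Solr started" from "Solr is actually serving on the port I'm visiting". This is only a debugging sketch in plain Java; the ports and the /solr/ path are taken from the question itself:

import java.net.HttpURLConnection;
import java.net.URL;

public class SolrPortCheck {
    public static void main(String[] args) {
        // Ports from the question: 8983 is Solr's default, 9999 is the one the browser used
        int[] ports = {8983, 9999};
        for (int port : ports) {
            try {
                HttpURLConnection conn =
                        (HttpURLConnection) new URL("http://127.0.0.1:" + port + "/solr/").openConnection();
                conn.setConnectTimeout(2000);
                conn.setReadTimeout(2000);
                // 200 means something is serving the Solr admin UI on this port;
                // 503 matches the 'Service Unavailable' seen in the browser
                System.out.println("port " + port + " -> HTTP " + conn.getResponseCode());
                conn.disconnect();
            } catch (Exception e) {
                System.out.println("port " + port + " -> no response (" + e + ")");
            }
        }
    }
}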

Aerospike - “n_bytes_memory went negative” in a memory-only namespace with TTL

We have a namespace configured to store data in memory only, with a default TTL of a couple of minutes. After we start putting data into it and expiration kicks in, we get these messages in the log (a lot of them, for roughly 30% of expired records):
WARNING (namespace): (namespace.c::762) set_id 1 - n_bytes_memory went negative!
I have a simple client app with a server config that reproduces this: https://github.com/akkomar/aerospike-test (it's based on Docker and is very easy to start).
Any advice on what the reason might be?
Edit:
I checked this on versions 3.6.4, 3.7.0.1 and 3.7.4
Configuration file used for testing (from https://github.com/akkomar/aerospike-test/blob/master/etc/aerospike.conf):
service {
    user root
    group root
    paxos-single-replica-limit 1
    pidfile /var/run/aerospike/asd.pid
    service-threads 4
    transaction-queues 4
    transaction-threads-per-queue 4
    proto-fd-max 1024
}

logging {
    file /var/log/aerospike/aerospike.log {
        context any info
    }
    console {
        context any info
        context namespace detail
    }
}

network {
    service {
        address any
        port 3000
    }
    heartbeat {
        mode mesh
        port 3002
        mesh-port 3002
        interval 150
        timeout 10
    }
    fabric {
        port 3001
    }
    info {
        port 3003
    }
}

namespace test_ns {
    replication-factor 2
    memory-size 1G
    default-ttl 10S
    storage-engine memory
}
Edit 2:
It seems this happens only when I update records via a UDF. The simplest UDF that reproduces it:
local VAL_KEY = "v"

function add_data(rec, val_to_add, ttl_to_set)
    if aerospike:exists(rec) then
        rec[VAL_KEY] = val_to_add
        aerospike:update(rec)
    else
        rec[VAL_KEY] = val_to_add
        aerospike:create(rec)
    end
end
When I execute the same operation via the Java API, everything seems to work fine (the GitHub repo mentioned earlier has been updated with a Java API example).
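For reference, the two write paths being compared might look roughly like this with the Aerospike Java client. This is only a sketch: the host, set name, bin value, and the UDF module name "test_udf" are assumptions for illustration, not taken from the linked repo:

import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Bin;
import com.aerospike.client.Key;
import com.aerospike.client.Value;
import com.aerospike.client.policy.WritePolicy;

public class WritePathsExample {
    public static void main(String[] args) {
        AerospikeClient client = new AerospikeClient("127.0.0.1", 3000);
        try {
            Key key = new Key("test_ns", "test_set", "some-key");

            WritePolicy policy = new WritePolicy();
            policy.expiration = 10; // TTL in seconds, matching the namespace default-ttl

            // Path 1: plain put via the Java API (reported to work fine)
            client.put(policy, key, new Bin("v", 42));

            // Path 2: the same update routed through the Lua UDF
            // (assumes the UDF above has been registered as module "test_udf")
            client.execute(policy, key, "test_udf", "add_data", Value.get(42), Value.get(10));
        } finally {
            client.close();
        }
    }
}

The behaviour reported above is that only the UDF path triggers the n_bytes_memory warnings once those records expire, while the plain put does not.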
The meaning of the error message is that the space we have accounted for the set in memory went to a negative number, which should not be possible.
This has been logged in our internal bug-tracking system for resolution in a future release.
It turned out it was a bug in Aerospike.
It's fixed in version 3.7.4.1 (a detailed explanation is at https://discuss.aerospike.com/t/problem-with-expiring-records-in-memory-only-namespace-n-bytes-memory-went-negative/2560/6).

Basic IdentityServer configuration issue

I am trying to get the IdentityServer v3 AspNetIdentity example running. I downloaded it and changed the connection strings in the Host project's App.config to use my local SQL server. I didn't change anything else.
When I run the project I get this:
SelfHost.vshost.exe Warning: 0 : [Thinktecture.IdentityServer.Core.Configuration.IdentityServerServiceFactory]: 18/12/2014 10:55:51 PM -- AuthorizationCodeStore not configured - falling back to InMemory
SelfHost.vshost.exe Warning: 0 : [Thinktecture.IdentityServer.Core.Configuration.IdentityServerServiceFactory]: 18/12/2014 10:55:51 PM -- TokenHandleStore not configured - falling back to InMemory
SelfHost.vshost.exe Warning: 0 : [Thinktecture.IdentityServer.Core.Configuration.IdentityServerServiceFactory]: 18/12/2014 10:55:51 PM -- ConsentStore not configured - falling back to InMemory
SelfHost.vshost.exe Warning: 0 : [Thinktecture.IdentityServer.Core.Configuration.IdentityServerServiceFactory]: 18/12/2014 10:55:51 PM -- RefreshTokenStore not configured - falling back to InMemory
I get that it has a problem setting up required databases, but there's not much feedback on exactly what the problem is. Any ideas?
These are warnings, not errors. The current docs explain what's mandatory and what's optional to configure: https://identityserver.github.io/Documentation/docsv2/configuration/serviceFactory.html

Issue with OpenShift Origin MongoDB service

I have installed OpenShift Origin V3 on AWS EC2 (Fedora 19) using oo-install. The setup is one broker + one node.
I was making some modifications to the security groups to make them more restrictive, and that ended up causing some issues in the mongo service.
1. The mongod service does not start up, and its status shows failed.
The /var/log/mongodb/mongodb.log says:
Thu Mar 6 11:24:08.189 [initandlisten] ERROR: listen(): bind() failed errno:99 Cannot assign requested address for socket: :27017
Thu Mar 6 11:24:08.189 [initandlisten] now exiting
Running oo-accept-broker -v says:
FAIL: error logging into mongo db: MOPED: Retrying connection to primary for replica set :27017">]>: MOPED: Retrying connection to primary for replica set :27017">]>/MOPED: --username Retrying, exit code: 1
Any pointers on how to resolve this will be greatly appreciated.
Thanks
Shabna
I would try rolling back your changes to the security groups first, then reapply them one by one to see which change causes the issue. Then post that specific change here and see if anyone can comment on why it affects MongoDB.

VxWorks boot hang (starting at 0x100000)

I'm trying to boot VxWorks 6.3 on a Wind River SBC83XX PowerQUICC II Pro. I'm using Wind River Workbench as my IDE. I configured the kernel, built it, and attempted to run it, but it hangs on Starting at 0x100000 with no further output.
Here is the output of the terminal after typing # at the prompt:
boot device : mottsec
unit number : 0
processor number : 0
host name : XXXXXXXXXXX
file name : C:\WindRiver\workspace\vxworks-dev\default\vxWorks
inet on ethernet (e) : 69.88.163.22:ffffff00
host inet (h) : 69.88.163.21
gateway inet (g) : 69.88.163.1
user (u) : XXXXXXXXX
ftp password (pw) : XXXXXXXX
flags (f) : 0x0
Attaching interface lo0... done
Attached IPv4 interface to mottsec unit 0
Loading... 1838288
Starting at 0x100000...
Any suggestions would be greatly appreciated; I need this working for a college class on a tight schedule.
Many things could be wrong.
The first thing I would do is check the configured RAM size. If it exceeds the amount actually on the board, this can happen.
Are the serial port and shell configured? I would suggest adding the standalone shell bundle.
If the symbol table is built as a standalone table, this can also occur depending on other configuration options, so compile the symbol table into the image.
In Workbench: Kernel Configuration -> Ctrl+F, search for "built-in symbol table", and include it.