Any way to create a cache using RemoteCacheManager on a remote server? - infinispan

I am trying to set up a replicated cache across 2 different servers using Infinispan.
Node1 and Node2 are 2 physical servers on which the HotRod server is running.
My intention is to create some caches (with a custom configuration) on node1/node2 from a remote client (Node3).
On Node3,
I am trying to do the following:
RemoteCacheManager rm = new RemoteCacheManager("node1ip4address", portNumber);
rm.getCache("namedcache1"); ----> this method's javadoc says:
/**
 * Retrieves a named cache from the remote server if the cache has been
 * defined; otherwise, if the cache name is undefined, it will return null.
 */
I checked the source code of RemoteCacheManager. This class does not have a defineConfiguration() method such as the one that exists in EmbeddedCacheManager.
Is there a way I can create a cache on a remote node?
Thanks,
-Venkat

No, there is no way to create a cache through the HotRod protocol. Even in embedded mode, Infinispan doesn't have a way to say "create this cache on all the cluster nodes", which you would need with HotRod because you don't know which server you're accessing.
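For comparison, here is a minimal embedded-mode sketch using the defineConfiguration() method mentioned in the question; it illustrates the limitation, since the definition only takes effect on the node that runs it:

import org.infinispan.Cache;
import org.infinispan.configuration.cache.CacheMode;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.configuration.global.GlobalConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;
import org.infinispan.manager.EmbeddedCacheManager;

public class DefineCacheLocally {
    public static void main(String[] args) {
        // Clustered manager with default transport settings.
        EmbeddedCacheManager manager = new DefaultCacheManager(
                new GlobalConfigurationBuilder().transport().defaultTransport().build());
        Configuration replicated = new ConfigurationBuilder()
                .clustering().cacheMode(CacheMode.REPL_SYNC)
                .build();
        // defineConfiguration() is local to this manager: every node that
        // should host "namedcache1" has to execute it itself.
        manager.defineConfiguration("namedcache1", replicated);
        Cache<String, String> cache = manager.getCache("namedcache1");
        cache.put("k", "v");
        manager.stop();
    }
}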
The CacheManager JMX bean has a startCache method, but you still can't define new configurations (they'll use the default cache's configuration). And you need to call it on every node of the cluster.
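If you go the JMX route, a hedged sketch of invoking that operation from Java follows; the service URL, port, and exact ObjectName are assumptions that vary by Infinispan version and server setup (verify them with JConsole first), and you would repeat this for every node:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class StartCacheOverJmx {
    public static void main(String[] args) throws Exception {
        // Hypothetical JMX endpoint on node1.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://node1:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // ObjectName pattern is version-dependent.
            ObjectName cacheManager = new ObjectName(
                    "org.infinispan:type=CacheManager,name=\"DefaultCacheManager\",component=CacheManager");
            // Starts the cache with the default cache's configuration.
            mbs.invoke(cacheManager, "startCache",
                    new Object[] { "namedcache1" },
                    new String[] { String.class.getName() });
        }
    }
}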
Obviously, it would be best if you could configure the caches statically in the server's configuration.
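A statically defined replicated cache in the server's configuration would look something like this (a sketch; element names vary across Infinispan server versions):

<cache-container name="clustered">
    <replicated-cache name="namedcache1" mode="SYNC"/>
</cache-container>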

We can use a REST call to the management interface running on port 9990 to create the cache:
curl --digest -s -i -u "usr:pwd" -X POST -H 'Content-type: application/json' -d @cacheTemplate.json http://serverurl:9990/management
where cacheTemplate.json creates a cache named cart in the container named clustered:
{
  "address": [
    "subsystem",
    "datagrid-infinispan",
    "cache-container",
    "clustered",
    "configurations",
    "CONFIGURATIONS",
    "distributed-cache-configuration",
    "cart"
  ],
  "operation": "add",
  "mode": "SYNC",
  "store-type": "None",
  "store-original-type": "None",
  "template": false
}
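To verify the result, you can read the new resource back through the same management endpoint (a sketch; read-resource is the standard WildFly/DMR read operation, posted the same way as the add above):

curl --digest -s -u "usr:pwd" -X POST -H 'Content-type: application/json' \
  -d '{"address":["subsystem","datagrid-infinispan","cache-container","clustered","configurations","CONFIGURATIONS","distributed-cache-configuration","cart"],"operation":"read-resource"}' \
  http://serverurl:9990/management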

Related

How can I set up the kube-apiserver to allow kubectl from outside the cluster

I have a single-master, multi-node Kubernetes cluster going. It works great. However, I want to allow kubectl commands to be run from outside the master server. How do I run kubectl get node from my laptop, for example?
If I install kubectl on my laptop I get the following error:
error: client-key-data or client-key must be specified for kubernetes-admin to use the clientCert authentication method
How do I go about this? I have read through the Kubernetes authorisation documentation but I must say it's a bit Greek to me. I am running version 1.10.2.
Thank you.
To extend @sfgroups' answer:
Configurations of all Kubernetes clusters you are managing are stored in the $HOME/.kube/config file. If you have that file on the master node, the easy way is to copy it to the $HOME/.kube/config file on your local machine.
You can choose other places, and then specify the location via the KUBECONFIG environment variable:
export KUBECONFIG=/etc/kubernetes/config
or use the --kubeconfig command line parameter instead.
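For example, the flag equivalent of the export above:

kubectl get nodes --kubeconfig /etc/kubernetes/config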
Cloud providers often give you the possibility to download the config to your local machine from the web interface or via a cloud management command.
For GCP:
gcloud container clusters get-credentials NAME [--region=REGION | --zone=ZONE, -z ZONE] [GCLOUD_WIDE_FLAG …]
For Azure:
az login -u yourazureaccount -p yourpassword
az acs kubernetes get-credentials --resource-group=<cluster-resource-group> --name=<cluster-name>
If the cluster was created using the kops utility, you can get the config file with:
kops export kubeconfig ${CLUSTER_NAME}
From your master, copy the /root/.kube directory to the C:\Users\<your-user>\.kube location on your laptop.
kubectl will pick up the certificate from the config file automatically.
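Once the file is in place, a quick check from the laptop confirms everything works:

kubectl get nodes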

JMeter can't send data to influxdb in docker environment

I want to use InfluxDB and Grafana in a Docker environment to show time-series data from JMeter.
I tried the set up from this post: http://www.testautomationguru.com/jmeter-real-time-results-influxdb-grafana/
and the only difference here is that I'm in a Docker environment. So I set up the InfluxDB configuration from the information given on Docker Hub (https://hub.docker.com/_/influxdb/):
I changed the configuration file accordingly and typed the following in the terminal:
$ docker run -p 8086:8086 \
  -v $PWD/influxdb.conf:/etc/influxdb/influxdb.conf:ro \
  influxdb -config /etc/influxdb/influxdb.conf
Finally, when I want to get the data from localhost:8083, enter the jmeter database, and type "SHOW MEASUREMENTS", nothing shows there.
What might be the reason here?
Port 8086 is for the HTTP API to add the data. If you use the Graphite protocol, port 2003 should be enabled and mapped:
docker run -p 8086:8086 -p 2003:2003 ...
will work.
Please check the JMeter Backend Listener settings: check the IP and port of the InfluxDB container there; it shouldn't be localhost.
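To rule out plain connectivity problems first, you can hit InfluxDB's ping endpoint from the JMeter host (a sketch; <docker-host-ip> is a placeholder):

# A "204 No Content" response means the HTTP API is reachable.
curl -i http://<docker-host-ip>:8086/ping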

How can I know the current configuration of a running Redis instance

I started a Redis instance using an rc.local script.
su - ec2-user -c redis-server /home/ec2-user/redis.conf
Even though in the configuration file I provided (/home/ec2-user/redis.conf) I specified
protected-mode no
connecting to the Redis instance still generates the following error message:
Error: Ready check failed: DENIED Redis is running in protected mode because protected mode is enabled, no bind address was specified, no authentication password is requested to clients. In this mode connections are only accepted from the loopback interface. If you want to connect from external computers to Redis you may adopt one of the following solutions: 1) Just disable protected mode sending the command 'CONFIG SET protected-mode no' from the loopback interface by connecting to Redis from the same host the server is running, however MAKE SURE Redis is not publicly accessible from internet if you do so. Use CONFIG REWRITE to make this change permanent. 2) Alternatively you can just disable the protected mode by editing the Redis configuration file, and setting the protected mode option to 'no', and then restarting the server. 3) If you started the server manually just for testing, restart it with the '--protected-mode no' option. 4) Setup a bind address or an authentication password. NOTE: You only need to do one of the above things in order for the server to start accepting connections from the outside.
What can I do to check the current configuration of a running Redis?
Connect locally to your Redis and run:
127.0.0.1:6379> CONFIG GET protected-mode
You'll get the current running value.
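For example, from the host the server runs on, using the stock redis-cli:

redis-cli CONFIG GET protected-mode
# or dump every directive at once (quote the * so the shell doesn't expand it):
redis-cli CONFIG GET '*'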
You can run your server with more logging:
redis-server /etc/myredis.conf --loglevel verbose
Regards,

How to create a directory in HDFS on Google Cloud Platform via Java API

I am running a Hadoop cluster on Google Cloud Platform, using Google Cloud Storage as the backend for persistent data. I am able to SSH to the master node from a remote machine and run hadoop fs commands. Anyway, when I try to execute the following code I get a timeout error.
Code
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

FileSystem hdfs = FileSystem.get(new URI("hdfs://mymasternodeip:8020"), new Configuration());

// Print the home directory
Path homeDir = hdfs.getHomeDirectory();
System.out.println("Home folder: " + homeDir);

// Create a directory
Path workingDir = hdfs.getWorkingDirectory();
Path newFolderPath = new Path("/DemoFolder");
newFolderPath = Path.mergePaths(workingDir, newFolderPath);
if (hdfs.exists(newFolderPath)) {
    hdfs.delete(newFolderPath, true); // Delete existing directory
}
// Create new directory
hdfs.mkdirs(newFolderPath);
When executing the hdfs.exists() command I get a timeout error.
Error
org.apache.hadoop.net.ConnectTimeoutException: Call From gl051-win7/192.xxx.1.xxx to 111.222.333.444.bc.googleusercontent.com:8020 failed on socket timeout exception: org.apache.hadoop.net.ConnectTimeoutException: 20000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel[connection-pending remote=111.222.333.444.bc.googleusercontent.com/111.222.333.444:8020]
Are you aware of any limitations using the Java Hadoop APIs against Hadoop on Google Cloud Platform?
Thanks!
It looks like you're running that code on your local machine and trying to connect to the Google Compute Engine VM; by default, GCE has strict firewall settings to avoid exposing your external IP addresses to arbitrary inbound connections. If you're using defaults, then your Hadoop cluster should be on the "default" GCE network. You'll need to follow the adding a firewall instructions to allow incoming TCP connections on port 8020, and possibly on other Hadoop ports as well, from your local IP address for this to work. It'll look something like this:
gcloud compute firewall-rules create allow-http \
--description "Inbound HDFS." \
--allow tcp:8020 \
--format json \
--source-ranges your.ip.address.here/32
Note that you really want to avoid opening a 0.0.0.0/0 source-range since Hadoop isn't doing authentication or authorization on those incoming requests. You'll want to restrict it as much as possible to only the inbound IP addresses from which you plan to dial in. You may need to open up a couple other ports as well depending on what functionality you use connecting to Hadoop.
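To double-check what you opened, and to remove the rule once you're done testing (same rule name as above):

gcloud compute firewall-rules describe allow-http
gcloud compute firewall-rules delete allow-http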
The more general recommendation is that wherever possible, you should try to run your code on the Hadoop cluster itself; in that case, you'll use the master hostname itself as the HDFS authority rather than external IP:
hdfs://<master hostname>/foo/bar
That way, you can limit the port exposure to just the SSH port 22, where incoming traffic is properly gated by the SSH daemon, and then your code doesn't have to worry about what ports are open or even about dealing with IP addresses at all.
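Adapting the snippet from the question, it would look something like this when run on the cluster itself (master-hostname is a placeholder):

// No external IP or firewall rule needed when this runs on the cluster.
FileSystem hdfs = FileSystem.get(
        new URI("hdfs://master-hostname:8020"), new Configuration());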

MongoDB - Adding new config servers in production

I've been doing some tests with MongoDB and sharding, and at some point I tried to add new config servers to my mongos router (at that time, I was playing with just one config server). But I couldn't find any information on how to do this.
Has anybody tried to do such a thing?
Unfortunately you will need to shut down the entire system.
1. Shut down all processes (mongod, mongos, config server).
2. Copy the data subdirectories (dbpath tree) from the config server to the new config servers.
3. Start the config servers.
4. Restart mongos processes with the new --configdb parameter.
5. Restart mongod processes.
From: http://www.mongodb.org/display/DOCS/Changing+Config+Servers
Use DNS CNAMEs
Make sure to use DNS entries, or at least /etc/hosts entries, for all mongod and mongo config servers when you set up multiple config servers in /etc/mongos.conf, and when you set up replica sets and/or sharding.
E.g. a common pitfall on AWS is to use just the private DNS name of EC2 instances, but these can change over time... and when that happens you'll need to shut down your entire MongoDB system, which can be extremely painful to do if it's in production.
The Configure Sharding and Sample Configuration Session pages appear to have what you're looking for.
You must have either 1 or 3 config servers; anything else will not work as expected.
You need to dump and restore content from your original config server to 2 new config servers before adding them to mongos's --configdb.
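A hedged sketch of that dump-and-restore step (host names and paths are placeholders):

# Run from a machine that can reach all config servers:
mongodump --host oldconfig.example.com --port 20000 --out /backup/configdb
mongorestore --host newconfig1.example.com --port 20000 /backup/configdb
mongorestore --host newconfig2.example.com --port 20000 /backup/configdb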
The relevant section is:
Now you need a configuration server and mongos:
$ mkdir /data/db/config
$ ./mongod --configsvr --dbpath /data/db/config --port 20000 > /tmp/configdb.log &
$ cat /tmp/configdb.log
$ ./mongos --configdb localhost:20000 > /tmp/mongos.log &
$ cat /tmp/mongos.log
mongos does not require a data directory; it gets its information from the config server.
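Once mongos is up, a quick way to confirm it can talk to the config server is the shell's sharding status helper:

$ ./mongo --port 27017   # connect to mongos
mongos> sh.status()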