I want to store data with GeoMesa into a data store (e.g. Redis) and to visualize/publish this data with GeoServer.
I developed an interface (and the classes that implement it) in Java to store data in a Redis server. Then I installed the "GeoServer with Redis" plugin.
Thus, when I add a new vector data source, GeoServer offers me the option "Redis (GeoMesa)". I get an error when I submit the parameters of this new data source in GeoServer. I tried it both before and after storing data in Redis, and the result is the same.
Redis was installed from the official Docker image.
Parameters used to create the data store:
redis.url='localhost:6379'
redis.catalog='geomesa'
redis.connection.pool.size='16'
geomesa.query.threads='8'
geomesa.query.timeout=''
redis.pipeline.enabled=FALSE
redis.connection.pool.validate=TRUE
geomesa.stats.enable=TRUE
geomesa.query.audit=TRUE
geomesa.query.loose-bounding-box=FALSE
geomesa.query.caching=FALSE
geomesa.security.auths=''
geomesa.security.auths.force-empty=TRUE
GeoServer prints this output:
Error creating data store, check the parameters. Error message: Could not get a resource from the pool
Unfortunately, I don't have access to the stack trace.
Are you sure that your Redis instance is accessible on localhost:6379? Are you running Redis 5+ (GeoMesa was developed against Redis 5)?
You could try running through the Redis GeoMesa quickstart, which would eliminate any potential issues with GeoServer and should also show you a stack trace.
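As a quick connectivity check (assuming the Docker setup from the question), running redis-cli -h localhost -p 6379 ping from the machine where GeoServer runs should answer PONG if the Redis instance is reachable at that address.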
I had a similar issue. However, both my GeoServer and Redis environments are dockerized and running in a VirtualBox Ubuntu virtual machine.
First, I forwarded my Redis port to my host IP. Then I pointed the GeoServer Redis URL to my host IP address instead of 127.0.0.1 or localhost.
This solved my problem.
If using your host IP address also works for anyone, kindly let us know.
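As a sketch of that setup (the host IP and Compose layout are assumptions for illustration, not a verified configuration), publishing the Redis port, e.g. in a docker-compose.yml, makes it reachable through the host IP:

services:
  redis:
    image: redis:5          # official Redis image, as in the question
    ports:
      - "6379:6379"         # publish the Redis port on the host

In the GeoServer data store form, redis.url then becomes <host-ip>:6379 (for example 192.168.56.10:6379) rather than localhost:6379, because localhost inside the GeoServer container refers to that container itself, not to the host running Redis.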
Related
Yesterday I tried to build a realtime app with Socket.IO running on multiple nodes. Following the documentation, I used the Redis adapter, which is configured with an IP address and port.
With multiple nodes, the app sits behind an nginx load balancer and shares a single Redis adapter, and it runs well.
My question is: if hundreds of clients hit the app, and the app hits a single Redis adapter, I think this could slow the server down. Can I load-balance/cluster the Redis adapter too?
I tried to find an answer and found ioredis mentioned, but I'm not sure...
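For context, a minimal sketch (untested; package choice, hostnames, and ports are placeholders) of pointing the Socket.IO Redis adapter at a Redis Cluster through an ioredis Cluster client instead of a single Redis instance:

// Sketch only: Socket.IO Redis adapter backed by an ioredis Cluster client
const { Server } = require('socket.io');
const { createAdapter } = require('@socket.io/redis-adapter');
const Redis = require('ioredis');

const io = new Server(3000);

// ioredis cluster client; the node list is an assumption for illustration
const pubClient = new Redis.Cluster([
  { host: 'redis-node-1', port: 6379 },
  { host: 'redis-node-2', port: 6379 },
  { host: 'redis-node-3', port: 6379 },
]);
const subClient = pubClient.duplicate();

io.adapter(createAdapter(pubClient, subClient));

io.on('connection', (socket) => {
  // broadcast reaches clients connected to any node, via the shared adapter
  socket.on('message', (msg) => io.emit('message', msg));
});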
I've got an express service running in a minikube cluster and I'm trying to set up a Redis client, but when I try to run the service with the Redis client created it basically stalls on deployment and times out. As soon as I add the line:
const client = redis.createClient('http://127.0.0.1:6379');
My service will not deploy and run (even running the default with no supplied address causes the same issue).
I'm quite new to Kubernetes in general, so I'm not sure if this is potentially an issue with minikube, like trying to create a client from inside the cluster with that address not being possible, or something along those lines.
I'm completely lost with why just trying to create a client is causing this issue so any advice or direction would be greatly appreciated.
Try using "service-name.namespace-name.svc.cluster.local" instead of an IP address to connect to the service.
For example, if the service name is car-redis-service and the namespace is default, the call looks like:
redis.createClient(REDISPORT, 'redis://car-redis-service.default.svc.cluster.local')
Or
redis.createClient(REDISPORT, 'car-redis-service.default.svc.cluster.local')
(source)
Here REDISPORT is the port Redis is configured to listen on.
For more information on Redis in Kubernetes, refer to this article.
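For reference, a fuller sketch of that call inside an Express/Node service (assuming node_redis 3.x, where createClient(port, host) is a valid signature, and the car-redis-service/default names from the example above):

const redis = require('redis');

const REDISPORT = 6379; // the port Redis is configured to listen on

// node_redis 3.x-style createClient(port, host); the plain-hostname form above
// matches this signature, with the in-cluster DNS name as the host
const client = redis.createClient(
  REDISPORT,
  'car-redis-service.default.svc.cluster.local'
);

client.on('error', (err) => console.error('Redis error:', err));
client.on('ready', () => console.log('Connected to Redis'));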
I tried to create a cluster in Apache Geode by providing the hostname and IP address of the remote system in the gemfire.properties file. Somehow, I am not able to create a cluster.
Can anybody please help with the steps to create a cluster (including multi-site)?
Thank you
It's not clear from the description if you just want to create a simple GemFire cluster or multiple clusters connected through the Geode WAN replication mechanism...
That said, to start a local Geode cluster you can go through Apache Geode in 15 Minutes or Less; it's a quick introduction that shows you how to use gfsh to start a locator and some servers, create a region, monitor the system using PULSE, etc.
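For reference, the basic gfsh flow from that guide looks roughly like this (the member and region names are placeholders):

gfsh> start locator --name=locator1
gfsh> start server --name=server1
gfsh> create region --name=exampleRegion --type=REPLICATE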
To set up WAN replication, on the other hand, you can go through Configuring a Multi-site (WAN) System. The most important thing to note about this configuration is that your locators need to know about the locators on the remote system, so you need to make sure that the property remote-locators is correctly configured. Once the locators can talk to each other over the WAN, they will share the connection information with the local servers, and these, in turn, will be able to communicate with the servers on the remote clusters.
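As a rough sketch of that property (hostnames, ports, and IDs below are placeholders, not values from your environment), the gemfire.properties used by the locators on one site could contain:

# site 1 locators (sketch; adjust hosts, ports, and IDs to your systems)
distributed-system-id=1
locators=locator-site1.example.com[10334]
remote-locators=locator-site2.example.com[10334]

with the locators on the other site using their own distributed-system-id and a remote-locators entry pointing back at site 1.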
Hope this helps.
Cheers.
Objective
I want to access the Redis database in Kubernetes from a function inside IBM Cloud Functions, using JavaScript.
Question
How do I get the right URI when Redis is running in a Pod in Kubernetes?
Situation
I used this sample to set up the Redis database in Kubernetes (this is the link to the sample in Kubernetes).
I run Kubernetes inside IBM Cloud.
Findings
I was not able to find an answer to my question in the Redis documentation.
As far as I understand, no password is configured by default.
Is this assumption right?
redis://[USER]:[PASSWORD]@[CLUSTER-PUBLIC-IP]:[PORT]
Thanks for the help... I know this is maybe too simple a question, but at the moment I can't see the forest for the trees ;-)
As far as I understand, no password is configured by default.
Yes, you are right: there is no default password in that Redis image.
If you follow the instructions you mentioned, you will use kubectl port forwarding, which forwards the port of your in-cluster Redis to your local machine by calling kubectl port-forward redis-master 6379:6379.
So in that case, Redis will be available on redis://localhost:6379 on your PC.
If you want to make it available directly from outside of the cluster, you need to create a Service with NodePort, a Service with LoadBalancer (if you are in a cloud), or simply a Service with an Ingress.
Inside the cluster, you can create a Service with a ClusterIP (which is actually simply a Service, because it always has a ClusterIP) for your Redis pod, and it will be available at:
redis://[USER]:[PASSWORD]@[SERVICE-IP]:[PORT]
Here is good official documentation about connecting applications with a Service.
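For illustration, a minimal Service sketch for the external case (assuming the Redis pods carry the label app: redis and listen on 6379; adjust to the labels from the sample you followed):

apiVersion: v1
kind: Service
metadata:
  name: redis-external
spec:
  type: LoadBalancer     # or NodePort, as described above
  selector:
    app: redis           # assumption: matches the labels on the Redis pods
  ports:
    - port: 6379
      targetPort: 6379

The resulting external IP (or node IP and node port) is what goes into the redis://...:[PORT] URI used from IBM Cloud Functions.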
I've been using ServiceStack PooledRedisClientManager with success. I'm now adding Twemproxy into the mix and have 4 Redis instances fronted with Twemproxy running on a single Ubuntu server.
This has caused problems with light load tests (100 users) connecting to Redis through ServiceStack. I've tried the original PooledRedisClientManager and BasicRedisClientManager; both give the error No connection could be made because the target machine actively refused it.
Is there something I need to do to get these two to play nice together? This is the Twemproxy config:
alpha:
  listen: 0.0.0.0:12112
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  redis: true
  timeout: 400
  server_retry_timeout: 30000
  server_failure_limit: 3
  server_connections: 1000
  servers:
   - 0.0.0.0:6379:1
   - 0.0.0.0:6380:1
   - 0.0.0.0:6381:1
   - 0.0.0.0:6382:1
I can connect to each one of the Redis server instances individually, it just fails going through Twemproxy.
I haven't used twemproxy before but I would say your list of servers is wrong. I don't think you are using 0.0.0.0 correctly.
Your servers would need to be (for your local testing):
servers:
 - 127.0.0.1:6379:1
 - 127.0.0.1:6380:1
 - 127.0.0.1:6381:1
 - 127.0.0.1:6382:1
You use 0.0.0.0 on the listen command to tell twemproxy to listen on all available network interfaces on the server. This means twemproxy will try to listen on:
the loopback address 127.0.0.1 (localhost),
your private IP (e.g. 192.168.0.1), and
your public IP (e.g. 134.xxx.50.34)
When you are specifying servers, the server config needs to know the actual address it should connect to; 0.0.0.0 doesn't make sense there, it needs a real value. So when you come to use different Redis machines, you will want to use the private IPs of each machine, like this:
servers:
 - 192.168.0.10:6379:1
 - 192.168.0.13:6379:1
 - 192.168.0.14:6379:1
 - 192.168.0.27:6379:1
Obviously your IP addresses will be different. You can use ifconfig to determine the IP on each machine. Though it may be worth using a hostname if your IPs are not statically assigned.
Update:
As you have said you are still having issues, I would make these recommendations:
Remove auto_eject_hosts: true. If you were getting some connectivity at first and then ended up with none, it's because something caused twemproxy to decide the Redis hosts were unhealthy and eject them.
So eventually, when your ServiceStack client connects to twemproxy, there are no hosts left to pass the request on to, and you get the error No connection could be made because the target machine actively refused it.
Do you actually have enough RAM to stress test your local machine this way? You are running at least 4 instances of Redis, which need real memory to store the values; twemproxy consumes a large amount of memory to buffer the requests it passes to Redis, and this memory pool is never released (see here for more information). Your ServiceStack app will consume memory too, more so in Debug mode. You'll probably have Visual Studio or another IDE open, plus the stress-test application and your operating system, and on top of all that there will likely be background processes and other applications you haven't closed.
A good practice is to try to run tests on isolated hardware as far as possible. If it is not possible, then the system must be monitored to check the benchmark is not impacted by some external activity.
You should read the Redis article here about benchmarking.
As you are using this in a localhost situation, use the BasicRedisClientManager, not the PooledRedisClientManager.