I am trying to use the Geode Redis Adapter as the server for the rate limiting provided by Spring Cloud Gateway. If I use a real Redis server, everything works perfectly, but with the Geode Redis Adapter it doesn't.
I am not too sure if this functionality is supported.
I tried to start a [Geode image](https://hub.docker.com/r/apachegeode/geode/) exposing the default Redis port 6379. After starting the container, I executed the following command using gfsh:
start server --name=redis --redis-port=6379 --J=-Dgemfireredis.regiontype=PARTITION_PERSISTENT
When I try to access it from my local machine with redis-cli -h localhost -p 6379, I can connect.
My implementation is simple:
application.yaml
- id: rate-limitter
  predicates:
  - Path=${GUI_CONTEXT_PATH:/rate-limit}
  - Host=${APP_HOST:localhost:8080}
  filters:
  - name: RequestRateLimiter
    args:
      key-resolver: "#{@remoteAddrKeyResolve}"
      redis-rate-limiter:
        replenishRate: ${rate.limit.replenishRate:1}
        burstCapacity: ${rate.limit.burstCapacity:2}
  uri: ${APP_HOST:localhost:8080}
Application.java
@Bean
KeyResolver remoteAddrKeyResolve() {
    // Resolve the rate-limit key from the client's remote address
    return exchange -> Mono.just(exchange.getRequest().getRemoteAddress().getAddress().getHostAddress());
}
When my application starts and I try to access /rate-limit, I expect it to connect to Redis and display my page.
However, my Spring application keeps trying to reconnect (i.l.c.p.ReconnectionHandler: Reconnected to localhost:6379), so the page is not displayed and the request keeps loading. FIXED in Edit 1 below.
The remaining problem is that I am using RedisRateLimiter and tried to simulate access with a for loop. Checking the RedisRateLimiter.REMAINING_HEADER, the value is always -1. That doesn't seem right, because I don't have this issue with Redis itself.
During the start of the application, I also receive these messages on connection to Geode Redis Adapter:
Starting without optional epoll library
Starting without optional kqueue library
Am I missing anything in my Geode Redis Adapter setup, or anything else in Spring?
Thank you
Edit 1: I had missed starting the locator and creating the region; that's why I wasn't able to connect.
start locator --name=locator
start server --name=redis --redis-port=6379 --J=-Dgemfireredis.regiontype=PARTITION_PERSISTENT
create region --name=redis-region --type=REPLICATE_PERSISTENT
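With the locator and region in place, a quick way to sanity-check the adapter from Python (a sketch using redis-py, with localhost:6379 assumed; note that, as far as I can tell, Spring's RedisRateLimiter evaluates a Lua script on the server, so scripting support matters for the rate limiter too):

```python
# Sketch: probe the Geode Redis adapter for the commands the rate limiter needs.
# Assumes redis-py is installed and the adapter listens on localhost:6379.
import redis

r = redis.Redis(host="localhost", port=6379)
print(r.ping())                    # basic connectivity check
print(r.incr("rate-limit-probe"))  # INCR should return a growing integer

try:
    # RedisRateLimiter evaluates a Lua script; see if the server accepts one.
    sha = r.script_load("return 1")
    print("scripting supported, sha:", sha)
except redis.exceptions.ResponseError as e:
    print("scripting not supported:", e)
```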
I have set up Spark SQL on Jupyterhub using the Apache Toree SQL kernel. I wrote a Python function to update Spark configuration options in the kernel.json file, so my team can change the configuration based on their queries and cluster setup. But I have to shut down the running notebook and re-open it, or restart the kernel, after running the Python function. That way I force the Toree kernel to re-read the JSON file and pick up the new configuration.
I thought of implementing this shutdown and restart of the kernel programmatically. I found the Jupyterhub REST API documentation and am able to implement it by invoking the related APIs. The problem is that the single-user server API port is set randomly by the Spawner object of Jupyterhub, and it changes every time I spin up a cluster. I want it to be fixed before launching the Jupyterhub service.
Here is a solution I tried based on Jupyterhub docs:
sudo echo "c.Spawner.port = 35289
c.Spawner.ip = '127.0.0.1'" >> /etc/jupyterhub/jupyterhub_config.py
But this did not work; the port was again set randomly by the Spawner. I think there is a way to fix this. Any help on this would be greatly appreciated. Thanks
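For reference, this is what the config file itself should contain (a sketch; two things to rule out first: with sudo echo ... >> file the redirection is performed by the invoking shell, not by sudo, so the append can fail, and JupyterHub must actually be launched with that file, e.g. jupyterhub -f /etc/jupyterhub/jupyterhub_config.py):

```python
# /etc/jupyterhub/jupyterhub_config.py -- sketch of the intended settings
c.Spawner.port = 35289      # fixed port for the single-user server
c.Spawner.ip = '127.0.0.1'  # address the single-user server binds to
```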
I have the following configuration: a remote Gremlin Server (TinkerPop 3.2.6) with JanusGraph.
I have the Gremlin Console (with the Janus plugin) and this conf in remote.yaml:
hosts: [10.1.3.2] # IP of the gremlin-server host
port: 8182
serializer: { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0, config: { serializeResultToString: true }}
So I want to make the connection through gremlin-server (not to JanusGraph directly via graph = JanusGraphFactory.build().set("storage.backend", "cassandra").set("storage.hostname", "127.0.0.1").open();) and get a graph that supports transactions.
Is this possible? As far as I can see, none of the TinkerFactory graphs support transactions.
As I understand it, to use JanusGraph through the Gremlin Server you should:
Define the IP and port in the config file of the gremlin-console:
conf/remote.yaml
Connect with the Gremlin Console to the gremlin server:
:remote connect tinkerpop.server conf/remote.yaml
==> Configured localhost/10.1.23.113: 8182
...and work in remote mode (using :> or :remote console), i.e. send ALL commands (or scripts) to the gremlin-server.
:> graph.addVertex(...)
or
:remote console
==>All scripts will now be sent to Gremlin Server - [10.1.2.222/10.1.2.222:818]
graph.addVertex(...)
You don't need to define variables for the graph and the traversal; instead use
graph. - for the graph
g. - for the traversal
This way, you can use all the graph features provided by JanusGraph.
TinkerPop provides a Cluster object to hold the connection configuration. From a Cluster object, a GraphTraversalSource can be spawned.
this.cluster = Cluster.build()
.addContactPoints("192.168.0.2","192.168.0.1")
.port(8082)
.credentials(username, password)
.serializer(new GryoMessageSerializerV1d0(GryoMapper.build().addRegistry(JanusGraphIoRegistry.getInstance())))
.maxConnectionPoolSize(8)
.maxContentLength(10000000)
.create();
this.gts = AnonymousTraversalSource
.traversal()
.withRemote(DriverRemoteConnection.using(cluster));
The gts object is thread-safe. With a remote connection, each query is executed in a separate transaction. Ideally, gts should be a singleton.
Make sure to call gts.close() and cluster.close() on application shutdown, otherwise you may leak connections.
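For comparison, the same remote pattern in gremlin-python (a sketch; the gremlinpython package, the default /gremlin endpoint, and a traversal source named g are assumptions):

```python
# Sketch: remote traversal over the Gremlin Server driver protocol.
# Assumes `pip install gremlinpython` and a server at 10.1.3.2:8182.
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

conn = DriverRemoteConnection("ws://10.1.3.2:8182/gremlin", "g")
g = traversal().withRemote(conn)

print(g.V().count().next())  # each remote traversal runs in its own transaction

conn.close()  # close on shutdown, for the same reason as cluster.close() above
```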
I believe that connecting a Java application to a running Gremlin Server using withRemote() will not support transactions. I have had trouble finding information on this as well, but as far as I can tell, if you want to do anything but read the graph, you need to use "embedded JanusGraph" and have your remotely hosted persistent data stored in a "storage backend" that you connect to from your application, as you describe in the second half of your question.
https://groups.google.com/forum/#!topic/janusgraph-users/t7gNBeWC844
Some discussion I found around it here ^^ mentions auto-committing single transactions in remote mode, but it doesn't seem to do that when I try.
For one of my home projects I decided to use docker containers and fig for orchestration (first time using those tools).
Here is my fig.yaml:
rabbitmq:
  image: dockerfile/rabbitmq:latest
mongodb:
  image: mongo
app:
  build: .
  command: python /code/app/main.py
  links:
    - rabbitmq
    - mongodb
  volumes:
    - .:/code
RabbitMQ's startup time is much longer than my application's load time. Even though the rabbitmq container starts loading first (since it is in app's links), when my app tries to connect to the RabbitMQ server it is not yet available (it is definitely a startup-timing problem, since if I just sleep for 5 seconds before connecting to RabbitMQ, everything works fine). Is there some standard way to resolve this kind of startup synchronisation problem?
Thanks.
I don't think there is a standard way to solve this, but it is a known problem and some people have acceptable workarounds.
There is a proposal on the Docker issue tracker about not considering a container as started until it is listening at the exposed ports. However it likely won't be accepted due to other problems it would create elsewhere. There is a fig proposal on the same topic as well.
The easy solution is to do the sleep like @jcortejoso says. An example from http://blog.chmouel.com/2014/11/04/avoiding-race-conditions-between-containers-with-docker-and-fig/:
function check_up() {
  service=$1
  host=$2
  port=$3
  max=13 # 1 minute
  counter=1
  while true; do
    python -c "import socket;s = socket.socket(socket.AF_INET, socket.SOCK_STREAM);s.connect(('$host', $port))" \
      >/dev/null 2>/dev/null && break || \
      echo "Waiting that $service on ${host}:${port} is started (sleeping for 5)"
    if [[ ${counter} == ${max} ]]; then
      echo "Could not connect to ${service} after some time"
      echo "Investigate locally the logs with fig logs"
      exit 1
    fi
    sleep 5
    (( counter++ ))
  done
}
And then use check_up "DB Server" ${RABBITMQ_PORT_5672_TCP_ADDR} 5672 before starting your app server, as described in the link above.
Another option is to use docker-wait. In your fig.yml:
rabbitmq:
  image: dockerfile/rabbitmq:latest
mongodb:
  image: mongo
rabbitmqready:
  image: aanand/wait
  links:
    - rabbitmq
app:
  build: .
  command: python /code/app/main.py
  links:
    - rabbitmqready
    - mongodb
  volumes:
    - .:/code
I have solved similar problems using a custom script set up as CMD in my Dockerfiles. Then you can run any check you wish (sleep for a while, or wait until the service is listening, for example). I don't think there is a standard way to do this; ideally the application itself would be able to wait until the external service is up and running and then connect to it, but this is not possible in most cases.
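If the application can do the waiting itself, a retry loop around the initial connection is usually enough. A sketch in Python (pika and the linked hostname rabbitmq are assumptions based on the fig.yaml above):

```python
# Sketch: retry the initial RabbitMQ connection instead of sleeping blindly.
# Assumes the pika client and a linked container reachable as "rabbitmq".
import time
import pika

def connect_with_retry(host="rabbitmq", attempts=12, delay=5):
    for attempt in range(1, attempts + 1):
        try:
            return pika.BlockingConnection(pika.ConnectionParameters(host=host))
        except pika.exceptions.AMQPConnectionError:
            print("RabbitMQ not ready (attempt %d/%d), retrying..." % (attempt, attempts))
            time.sleep(delay)
    raise RuntimeError("RabbitMQ never became available")

connection = connect_with_retry()
```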
For testing on our CI, we built a small utility that can be used in a Docker container to wait for linked services to be ready. It automatically finds all linked TCP services from their environment variables and repeatedly and concurrently tries to establish TCP connections until it succeeds or times out.
We also wrote a blog post describing why we built it and how we use it.
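The core idea is small enough to sketch (hypothetical code, not the actual utility): scan Docker's link environment variables for TCP endpoints and retry connecting to each until a deadline passes.

```python
# Sketch: wait for every linked service advertised via *_PORT_<n>_TCP_ADDR
# environment variables to accept a TCP connection, or give up after a timeout.
import os
import re
import socket
import time

def linked_services():
    pairs = []
    for name, value in os.environ.items():
        m = re.match(r"(.+)_PORT_(\d+)_TCP_ADDR$", name)
        if m:
            pairs.append((value, int(m.group(2))))
    return pairs

def wait_for(host, port, timeout=60):
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            socket.create_connection((host, port), timeout=5).close()
            return True
        except OSError:
            time.sleep(2)
    return False

for host, port in linked_services():
    if not wait_for(host, port):
        raise SystemExit("%s:%d did not come up in time" % (host, port))
```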
I created two separate directories in which I installed the standalone Mule ESB server:
/ee/mmc-distribution-mule-console-bundle-3.5.2-HF1
/ee2/mmc-distribution-mule-console-bundle-3.5.2-HF1
I start up the first server, and below is the status:
[root@x240perf2 mmc-distribution-mule-console-bundle-3.5.2-HF1]# ./status.sh
MMC is running as PID=1998.
Mule Enterprise Edition is running as PID=2619.
Then I try to start the second instance:
[root@x240perf2 mmc-distribution-mule-console-bundle-3.5.2-HF1]# ./startup.sh
Port 8585 is in use, please make it available and try again.
So apparently port 8585 is being used by the original instance.
So I stop the first instance and start the second instance, which comes up successfully, as follows:
./startup.sh
Please enter the desired port for Mule [Default 7777]:
Starting MMC, please wait...
class com.sun.jersey.multipart.impl.MultiPartConfigProvider
class com.sun.jersey.multipart.impl.MultiPartReader
class com.sun.jersey.multipart.impl.MultiPartWriter
[11-13 16:49:19] WARN HttpSessionSecurityContextRepository [http-bio-8585-exec-1]: Failed to create a session, as response has been committed. Unable to store SecurityContext.
[11-13 16:49:32] WARN HttpMethodBase [http-bio-8585-exec-12]: Going to buffer response body of large or unknown size. Using getResponseBodyAsStream instead is recommended.
[11-13 16:49:38] WARN HttpSessionSecurityContextRepository [http-bio-8585-exec-12]: Failed to create a session, as response has been committed. Unable to store SecurityContext.
Nov 13, 2014 4:49:50 PM org.apache.catalina.core.StandardServer await
INFO: A valid shutdown command was received via the shutdown port. Stopping the Server instance.
Nov 13, 2014 4:49:50 PM org.apache.coyote.AbstractProtocol pause
INFO: Pausing ProtocolHandler ["http-bio-8585"]
But notice it seems to be using 8585 for Tomcat (which I know little about, except that it is some sort of app server; I have never used it).
I examined this site:
http://www.mulesoft.org/documentation/display/33X/Running+Multiple+Mule+Instances
but it does not discuss the issue, and the page it points to does not seem current. Did I misunderstand something?
Is it possible to run two separate instances of Mule ESB at the same time,
and if so, how? (How would I change the port being used, and which file should I modify?)
Thanks
Edit: my second post, in response to the answer:
(BTW: I am using Mule ESB standalone Enterprise Edition 3.5.2)
To make sure I did not have any apps running
on port 8585, I shut down my original instance, created two new instances, and made sure no apps were deployed to either instance.
I brought up the first instance without issue, but the second instance still gives me the port 8585 in use error (from startup.sh).
This site says that the MMC default port is 7777, but the Tomcat default port on which it runs is 8585:
http://www.mulesoft.org/documentation/display/current/Setting+Up+MMC-Mule+ESB+Communications
I used the following command to find all files within my second instance containing port 8585:
find . -type f | xargs grep "8585"
Other than log files, I got two hits:
startup.sh
and
/mmc-3.5.2-HF1/apache-tomcat-7.0.52/conf/server.xml
I did NOT find in either instance the $MULE_HOME/apps/mmc/mule-config.xml (probably because I have no apps deployed)
Judging from server.xml, the MMC apparently uses Tomcat to
host the MMC application, and server.xml contains
the following:
<Connector port="8585" protocol="HTTP/1.1"
So I guess I could change 8585 to 8586 at this point, but ...
startup.sh has several (about 9 or 10) hardcoded references to 8585, used to check whether the MMC is running and to take action either way.
So do I actually have to replace 8585 with 8586 throughout startup.sh in the second instance, as well as change the server.xml port 8585 reference?
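If those two files really are the only live references (an assumption based on the find output above), I suppose a blunt sketch like this would rewrite both, after backing them up:

```python
# Sketch: point the second instance's MMC at 8586 instead of 8585.
# The paths are assumptions taken from the find output above; back up first.
import pathlib

for name in ["startup.sh",
             "mmc-3.5.2-HF1/apache-tomcat-7.0.52/conf/server.xml"]:
    path = pathlib.Path(name)
    path.write_text(path.read_text().replace("8585", "8586"))
```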
Thanks
You can run as many instances as you want, as long as they don't use the same ports. It looks like you are deploying something on port 8585, so in the second instance you have to select a different port.
Is that port being used in any application that you developed and deployed in the Mule runtime?
Also, if you are using the Mule runtime with the MMC agent activated, you also have to change the agent's port in the second instance. I think you can do that in /conf/wrapper.conf or by passing the following parameter to the startup script:
-Dmule.mmc.bind.port=7778
(or any port that is free).
You can run as many as you want.
In the MMC you can deploy and run many applications; each application has its own instance.
I was doing failover testing of MongoDB in my local environment. I have two mongo servers (hostname1, hostname2) and an arbiter.
I have the following configuration in my mongoid.yml file
localhost:
  hosts:
    - - hostname1
      - 27017
    - - hostname2
      - 27017
  database: myApp_development
  read: :primary
  use_activesupport_time_zone: true
Now when I start my rails application, everything works fine, and the data is read from the primary (hostname1). Then I kill the mongo process of the primary (hostname1), so the secondary (hostname2) becomes the primary and starts serving the data.
Then after some time I start the mongo process on hostname1, and it becomes the secondary in the replica set.
Now the primary (hostname2) and the secondary (hostname1) are working all right.
The real problem starts here.
I kill the mongo process of my new primary (hostname2), but this time the secondary (hostname1) does not become the primary, and any further requests to the rails application raise the following error:
Cannot connect to a replica set using seeds hostname2
Please help. Thanks in advance.
**UPDATE:**
I added some logging in the mongo repl_connection class and came across this:
When I boot the rails app, both hosts are in the seeds array that the mongo driver keeps track of. But during the second failover, only the host that went down is present in this array.
Hence I would also like to know how and when one of the hosts gets removed from the seed list.
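For what it's worth, here is how I inspect what the driver has discovered from a single seed (a sketch in pymongo rather than mongoid, just to illustrate that members should be rediscovered from the replica set config rather than the static seed list; the set name rs0 is an assumption):

```python
# Sketch: ask the driver which replica set members it discovered from one seed.
# Assumes pymongo and a replica set named rs0 (check with rs.conf()).
from pymongo import MongoClient

client = MongoClient("mongodb://hostname1:27017/?replicaSet=rs0")
print(client.primary)      # (host, port) of the member currently primary
print(client.secondaries)  # set of discovered secondaries
```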