How to log IP address in WSO2 ESB - Axis2

Hi, I am working with WSO2 ESB 4.7.0.
I want to log the client IP address in the proxy, so I have set a property in my proxy as shown below:
<property name="client_ip_address"
          expression="get-property('axis2','REMOTE_ADDR')"
          scope="default"
          type="STRING"/>
<log level="custom">
    <property name="client_ip_address" expression="get-property('client_ip_address')"/>
</log>
When I run the proxy, the log is generated as:
[2015-09-05 12:21:19,582] INFO - LogMediator client_ip_address = 127.0.0.1
It is not returning the actual IP address of the client; instead it returns 127.0.0.1.
How can I get the actual IP address in the log?
Thanks!

127.0.0.1 is the localhost address because you call the proxy from the same machine that WSO2 is running on.
Please call the proxy from a different machine than the machine running WSO2. Then you should see the effective IP address of the calling client.
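A related effect occurs when the ESB sits behind a load balancer or reverse proxy: REMOTE_ADDR then holds the proxy's address rather than the client's. A common workaround in that setup, sketched below, is to read the client IP from the transport header instead; this assumes your proxy sets the standard X-Forwarded-For header (the exact header name depends on your load balancer):

```xml
<!-- Sketch: read the originating client IP from the X-Forwarded-For
     transport header set by a load balancer / reverse proxy -->
<property name="client_ip_address"
          expression="get-property('transport', 'X-Forwarded-For')"
          scope="default"
          type="STRING"/>
<log level="custom">
    <property name="client_ip_address" expression="get-property('client_ip_address')"/>
</log>
```

Note that X-Forwarded-For may contain a comma-separated chain of addresses when several proxies are involved; the first entry is usually the original client.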

Related

Hawtio 2.10.0 is duplicating the URL for jolokia

I have ActiveMQ 5.15.13 running on localhost with Jolokia without any problem:
# wget --user admin --password admin --header "Origin: http://localhost" --auth-no-challenge http://localhost:8161/api/jolokia/read/org.apache.activemq:type=Broker,brokerName=localhost
--2020-06-22 14:49:15-- http://localhost:8161/api/jolokia/read/org.apache.activemq:type=Broker,brokerName=localhost
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:8161... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/plain]
Saving to: ‘org.apache.activemq:type=Broker,brokerName=localhost.2’
org.apache.activemq:type=Broker,brokerName=localhost.2 [ <=> ] 2,24K --.-KB/s in 0s
2020-06-22 14:49:15 (175 MB/s) - ‘org.apache.activemq:type=Broker,brokerName=localhost.2’ saved [2291]
Hawtio 2.10.0 looks like it's ok, but when I try to connect to ActiveMQ I receive this message:
This Jolokia endpoint is unreachable. Please check the connection details and try again.
I checked the network inspector, and I guess this is the problem:
Request URL: http://localhost:8161/hawtio/proxy/http/localhost/8161/api/jolokia/
After some changes to the URL I noticed that there is a hardcoded part of the URL:
http://localhost:8161/hawtio/proxy/
That part is always there, no matter what I do, and the other part:
http/localhost/8161/api/jolokia/
changes whenever I change the settings, but for some reason it becomes query strings instead of the expected URL:
http://localhost:8161/api/jolokia/
These are the options I'm using:
ACTIVEMQ_OPTS="$ACTIVEMQ_OPTS_MEMORY -Dhawtio.disableProxy=true -Dhawtio.realm=activemq -Dhawtio.role=admins -Dhawtio.rolePrincipalClasses=org.apache.activemq.jaas.GroupPrincipal -Djava.util.logging.config.file=logging.properties -Djava.security.auth.login.config=$ACTIVEMQ_CONF/login.config"
How can I fix this issue?
Thanks in advance.
After reviewing a lot of "the same" procedures for installing Hawtio with ActiveMQ, every question I could find, and the documentation for both ActiveMQ and Hawtio, I finally found some information, from 6 years ago, that suggested an "extra step" when using Hawtio with ActiveMQ, which fixed my issue.
I may be wrong, but from my point of view Hawtio has a long-standing bug where it uses the HOST URL as the base instead of the SETUP CONNECTION URL that is created. To fix the problem, you just need to add the following lines to <ACTIVEMQ PATH>/conf/jetty.xml:
<bean class="org.eclipse.jetty.webapp.WebAppContext">
    <property name="contextPath" value="/hawtio" />
    <property name="resourceBase" value="${activemq.home}/webapps/hawtio" />
    <property name="logUrlOnStart" value="true" />
</bean>
That should be inside:
<bean id="secHandlerCollection" class="org.eclipse.jetty.server.handler.HandlerCollection">
    <property name="handlers">
        <list>
            <ref bean="rewriteHandler"/>
            <!-- ... other existing handlers ... -->
        </list>
    </property>
</bean>
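Putting the two fragments together, the handlers section of jetty.xml would look roughly like this (a sketch; the other handlers already present in your jetty.xml may differ):

```xml
<bean id="secHandlerCollection" class="org.eclipse.jetty.server.handler.HandlerCollection">
    <property name="handlers">
        <list>
            <ref bean="rewriteHandler"/>
            <!-- the extra WebAppContext bean added for Hawtio -->
            <bean class="org.eclipse.jetty.webapp.WebAppContext">
                <property name="contextPath" value="/hawtio" />
                <property name="resourceBase" value="${activemq.home}/webapps/hawtio" />
                <property name="logUrlOnStart" value="true" />
            </bean>
        </list>
    </property>
</bean>
```

After editing jetty.xml, restart ActiveMQ so Jetty picks up the new context.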

Apache Ignite - Node running on remote machine not discovered

Apache Ignite Version is: 2.1.0
I am using TcpDiscoveryVmIpFinder to configure the nodes in an Apache Ignite cluster to setup a compute grid. Below is my configuration which is nothing but the example-default.xml, edited for the IP addresses:
<property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
        <property name="ipFinder">
            <!--
                Ignite provides several options for automatic discovery that can be used
                instead of static IP based discovery. For information on all options refer
                to our documentation: http://apacheignite.readme.io/docs/cluster-config
            -->
            <!-- Uncomment static IP finder to enable static-based discovery of initial nodes. -->
            <!--<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">-->
            <!-- <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.multicast.TcpDiscoveryMulticastIpFinder"> -->
            <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                <property name="addresses">
                    <list>
                        <!-- In distributed environment, replace with actual host IP address. -->
                        <value>xxx.40.16.yyy:47500..47509</value>
                        <value>xx.40.16.zzz:47500..47509</value>
                    </list>
                </property>
            </bean>
        </property>
    </bean>
</property>
If I start multiple nodes on an individual machine, the nodes on that machine discover each other and form a cluster, but the nodes on remote machines do not discover each other.
Any advice would be helpful.
First of all, make sure that you are really using this config file and not the default config. With the default configuration, nodes can find each other only on the same machine.
Once you've checked that, you also need to test that it's possible to connect from host 106.40.16.64 to 106.40.16.121 (and vice versa) on ports 47500..47509. It's possible that a firewall is blocking connections, or that these ports are simply closed.
For example, you can check this with netcat; run this from the 106.40.16.64 host:
nc -z 106.40.16.121 47500
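If connectivity is fine but discovery still fails, it is sometimes a multi-homed-host problem: the node binds discovery to an interface that remote peers cannot reach. TcpDiscoverySpi lets you pin the local discovery address and port range explicitly. A sketch, where localAddress/localPort/localPortRange come from the Ignite TcpDiscoverySpi API and the address values are placeholders matching the question:

```xml
<property name="discoverySpi">
    <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
        <!-- Pin discovery to the externally reachable interface -->
        <property name="localAddress" value="xxx.40.16.yyy"/>
        <property name="localPort" value="47500"/>
        <property name="localPortRange" value="10"/>
        <property name="ipFinder">
            <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                <property name="addresses">
                    <list>
                        <value>xxx.40.16.yyy:47500..47509</value>
                        <value>xx.40.16.zzz:47500..47509</value>
                    </list>
                </property>
            </bean>
        </property>
    </bean>
</property>
```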

Infinispan and jgroups Implementation on Openshift Stack

Question-1
I am trying to run an Infinispan cluster on OpenShift Tomcat gears, with nodes sitting on two different hosts. I am using TCP as the data transfer protocol and MPING as the discovery protocol.
If I try to use any of the JGroups-provided keywords for the bind address (GLOBAL, SITE_LOCAL, LINK_LOCAL, NON_LOOPBACK, match-interface, match-host, match-address), it binds the service to the public IP (64.X.X.X); LOOPBACK binds it to 127.0.0.1. Neither is what I want to achieve.
I want the JGroups service to run on the custom IP address provided by OpenShift, which looks somewhat like 127.2.155.1. If I can bind it to that IP, it will be easy for me to write port-forwarding rules so that the cluster members can discover each other even when they live on different hosts.
Using an environment property:
Map<String, String> envKeys = System.getenv();
for (String key : envKeys.keySet()) {
    System.out.println(key + ":" + envKeys.get(key));
    if (key.equalsIgnoreCase("OPENSHIFT_JBOSSEWS_IP")) {
        System.setProperty("OPENSHIFT_JBOSSEWS_IP", envKeys.get(key));
    }
}
It fails when doing the above, saying it could not find the IP address, or that 127.2.155.1 is an invalid IP address. Here is the sample jgroups.xml I am using in my project:
<config xmlns="urn:org:jgroups"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema /JGroups-3.4.xsd">
<TCP
bind_addr="${OPENSHIFT_JBOSSEWS_IP}"
bind_port="${jgroups.tcp.port:7800}"
port_range="0"
recv_buf_size="20m"
send_buf_size="640k"
max_bundle_size="31k"
use_send_queues="true"
enable_diagnostics="false"
bundler_type="sender-sends-with-timer"
thread_naming_pattern="pl"
thread_pool.enabled="true"
thread_pool.min_threads="2"
thread_pool.max_threads="30"
thread_pool.keep_alive_time="60000"
thread_pool.queue_enabled="true"
thread_pool.queue_max_size="100"
thread_pool.rejection_policy="Discard"
oob_thread_pool.enabled="true"
oob_thread_pool.min_threads="2"
oob_thread_pool.max_threads="30"
oob_thread_pool.keep_alive_time="60000"
oob_thread_pool.queue_enabled="false"
oob_thread_pool.queue_max_size="100"
oob_thread_pool.rejection_policy="Discard"
internal_thread_pool.enabled="true"
internal_thread_pool.min_threads="2"
internal_thread_pool.max_threads="4"
internal_thread_pool.keep_alive_time="60000"
internal_thread_pool.queue_enabled="true"
internal_thread_pool.queue_max_size="100"
internal_thread_pool.rejection_policy="Discard"
/>
<!-- Ergonomics, new in JGroups 2.11, are disabled by default in TCPPING until JGRP-1253 is resolved -->
<!--
<TCPPING timeout="3000"
initial_hosts="localhost[7800],localhost[7801]"
port_range="5"
num_initial_members="3"
ergonomics="false"
/>
-->
<MPING bind_addr="${OPENSHIFT_JBOSSEWS_IP}"
break_on_coord_rsp="true"
mcast_addr="${jgroups.mping.mcast_addr:228.2.4.6}"
mcast_port="${jgroups.mping.mcast_port:43376}"
ip_ttl="${jgroups.udp.ip_ttl:2}"
num_initial_members="3"/>
<MERGE3/>
<FD_SOCK/>
<FD timeout="3000" max_tries="5"/>
<VERIFY_SUSPECT timeout="1500"/>
<pbcast.NAKACK2 use_mcast_xmit="false"
xmit_interval="1000"
xmit_table_num_rows="100"
xmit_table_msgs_per_row="10000"
xmit_table_max_compaction_time="10000"
max_msg_batch_size="100"/>
<UNICAST3 xmit_interval="500"
xmit_table_num_rows="20"
xmit_table_msgs_per_row="10000"
xmit_table_max_compaction_time="10000"
max_msg_batch_size="100"
conn_expiry_timeout="0"/>
<pbcast.STABLE stability_delay="500" desired_avg_gossip="5000" max_bytes="1m"/>
<pbcast.GMS print_local_addr="false" join_timeout="3000" view_bundling="true"/>
<tom.TOA/> <!-- the TOA is only needed for total order transactions-->
<MFC max_credits="2m" min_threshold="0.40"/>
<FRAG2 frag_size="30k"/>
<RSVP timeout="60000" resend_interval="500" ack_on_delivery="false" />
</config>
Question-2
When Infinispan starts successfully, it opens two listening TCP ports in the same Java process: one on port 7800, as mentioned in the above config file, and another on a port randomly picked by Infinispan. I would like to understand more about these ports.
COMMAND  PID   USER  FD   TYPE  DEVICE      SIZE/OFF  NODE  NAME
java     5640  1334  44u  IPv4  1556890779  0t0       TCP   127.2.155.1:7800 (LISTEN)
java     5640  1334  44u  IPv4  1556890779  0t0       TCP   127.2.155.1:20772 (LISTEN)
Question-1:
Setting bind_addr to match-address=127.2.* should pick the correct address (127.2.155.1).
Does 127.2.155.1 show when you do an ifconfig?
Question-2:
The second port is probably opened by FD_SOCK. You can control this by setting attributes bind_addr and start_port in FD_SOCK.
[1] http://www.jgroups.org/manual/index.html#CommonProps
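Concretely, applying both suggestions to the jgroups.xml above would change the relevant attributes as sketched below (other attributes from the original config stay unchanged; start_port="57800" is an arbitrary free port chosen for illustration):

```xml
<TCP bind_addr="match-address=127.2.*"
     bind_port="${jgroups.tcp.port:7800}" />
<MPING bind_addr="match-address=127.2.*"
       mcast_addr="${jgroups.mping.mcast_addr:228.2.4.6}"
       mcast_port="${jgroups.mping.mcast_port:43376}" />
<!-- pin FD_SOCK's extra listening socket instead of letting it pick a random port -->
<FD_SOCK bind_addr="match-address=127.2.*" start_port="57800"/>
```

Pinning FD_SOCK this way also makes the second listening port predictable, which helps when writing port-forwarding rules.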

Could not connect to remote GemFire Cache Server

I am working with GemFire and Spring Data caching. I have successfully started a local cache server from Spring, but I could not connect to a remote cache server with the following configuration. I can, however, connect to the remote server using gfsh: connect --locator=remote IP[10334]
<gfe:client-cache id="client-cache" pool-name="my-pool"/>

<gfe:pool id="my-pool" subscription-enabled="true">
    <gfe:locator host="remote ip" port="10334" />
</gfe:pool>

<gfe:client-region id="Customer" name="Customer" cache-ref="client-cache">
    <gfe:cache-listener>
        <bean class="com.demo.util.LoggingCacheListener" />
    </gfe:cache-listener>
</gfe:client-region>

<bean id="cacheManager"
      class="org.springframework.data.gemfire.support.GemfireCacheManager">
    <property name="regions">
        <set>
            <ref bean="Customer" />
        </set>
    </property>
</bean>
The issue log is "Unable to prefill pool to minimum because:
com.gemstone.gemfire.cache.client.NoAvailableLocatorsException: Unable to connect to any locators in the list [remoeserver:10334]"
After I started another locator and server on my desktop, Spring could connect to the cluster, but it said the region did not exist when Spring's @Cacheable was triggered. The error log is: Request processing failed; nested exception is "Region named /Customer/Customer was not found during get request". The region name should be /Customer.
A client region is merely a proxy to a master region (e.g., a partitioned or replicated region) configured on a cache server. The server must also be configured to use the same locator address.
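For example, with Spring Data GemFire the server side could declare a region under the same name, so the client region /Customer has a master region to resolve to. A sketch, assuming a replicated server region (adjust the region type to your topology):

```xml
<!-- server side: a cache plus a region whose name matches the client region -->
<gfe:cache id="server-cache"/>
<gfe:replicated-region id="Customer" cache-ref="server-cache"/>
```

The region path seen by the client is then /Customer, matching the client-region name in the configuration above.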

Connecting to MS SQL Server 2005 via Hibernate

I have JRE 1.6 and am using the following hibernate.cfg.xml file. I am always getting "Cannot open connection" and "The port number 1433/DB is not valid."
<hibernate-configuration>
    <session-factory>
        <property name="hibernate.connection.driver_class">com.microsoft.sqlserver.jdbc.SQLServerDriver</property>
        <property name="hibernate.connection.url">jdbc:sqlserver://IP/DB</property>
        <property name="hibernate.connection.username"></property>
        <property name="hibernate.connection.password"></property>
        <property name="hibernate.connection.pool_size">10</property>
        <property name="show_sql">true</property>
        <property name="dialect">org.hibernate.dialect.SQLServerDialect</property>
        <property name="hibernate.hbm2ddl.auto">update</property>
    </session-factory>
</hibernate-configuration>
From the official documentation:
Building the Connection URL
The general form of the connection URL is
jdbc:sqlserver://[serverName[\instanceName][:portNumber]][;property=value[;property=value]]
where:
jdbc:sqlserver:// (Required) is known as the sub-protocol and is constant.
serverName (Optional) is the address of the server to connect to. This could be a DNS or IP address, or it could be localhost or 127.0.0.1 for the local computer. If not specified in the connection URL, the server name must be specified in the properties collection.
instanceName (Optional) is the instance to connect to on serverName. If not specified, a connection to the default instance is made.
portNumber (Optional) is the port to connect to on serverName. The default is 1433. If you are using the default, you do not have to specify the port, nor its preceding ':', in the URL.
property (Optional) is one or more optional connection properties. For more information, see Setting the Connection Properties. Any property from the list can be specified. Properties can only be delimited by using the semicolon (';'), and they cannot be duplicated.
So use the following instead:
jdbc:sqlserver://IP;databaseName=DB
You are connecting to host/instance. It should be a backslash: host\instance. Are you mixing up the concepts of instances and databases?
host\instance;databaseName=DB
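In the hibernate.cfg.xml above, the connection URL property would therefore become one of the following (sketches; IP, instance, and DB are placeholders carried over from the question):

```xml
<!-- connect to the default instance, selecting the database by name -->
<property name="hibernate.connection.url">jdbc:sqlserver://IP;databaseName=DB</property>
<!-- or, if a named instance is involved (note the backslash) -->
<property name="hibernate.connection.url">jdbc:sqlserver://IP\instance;databaseName=DB</property>
```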