JavaMelody with multiple apps and JVMs on the same node - jvm

We have 3 applications (app1/app2/app3) in a cluster (server1/server2), with 2 JVMs (ports 8080/8180) on each node, for example:
http://server1:8080/app1, http://server1:8080/app2,
http://server1:8080/app3
http://server1:8180/app1, http://server1:8180/app2,
http://server1:8180/app3
http://server2:8080/app1, http://server2:8080/app2,
http://server2:8080/app3
http://server2:8180/app1, http://server2:8180/app2,
http://server2:8180/app3
We can't override the path used to record data in a useful way. It is possible to set app1/app2/app3 in the storage path in web.xml, but then, on the same server, app1 on port 8080 and app1 on port 8180 will save files to the same folder.
The -D option on its own is not a workable option either: we can pass JVM-specific parameters, but if we put
"-Djavamelody.storage-directory=/tmp/javamelody_my_instance" as mentioned in GitHub ticket 692,
app1 will overwrite app2, or app2 will overwrite app3, and so on; in each case it causes issues.
Overwriting files is not good. How can we monitor each app in each JVM?
Any idea?

The response I got on GitHub was OK:
in each node/JVM configuration:
-Djavamelody.storage-directory=C:\Windows\Temp\javamelody_[port_number]
I will then have the folders:
javamelody_8080_app1, javamelody_8180_app1
javamelody_8080_app2, javamelody_8180_app2
javamelody_8080_app3, javamelody_8180_app3
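For example, with Tomcat this could be set per instance in its setenv.bat (a sketch under assumptions: the file location and ports are placeholders, adjust them to each JVM instance):
rem setenv.bat of the JVM instance listening on port 8080 (use javamelody_8180 in the 8180 instance)
set "CATALINA_OPTS=%CATALINA_OPTS% -Djavamelody.storage-directory=C:\Windows\Temp\javamelody_8080"
Each webapp's data then ends up in its own per-port folder, as listed above.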


Spring cloud config git refreshRate behaviour

I am trying to setup Spring Cloud Config Server and want to enable auto refresh of properties based on changes to the backing git repository.
Below is the bootstrap.yml of the server.
server:
  port: 8080
spring:
  application:
    name: my-configserver
  cloud:
    config:
      server:
        bootstrap: true
        git:
          uri: /Users/anoop/Documents/centralconfig
          refreshRate: 15
          searchPaths: '{application}/properties'
    bus:
      enabled: true
As per the documentation, spring.cloud.config.server.git.refreshRate determines
how often the config server will fetch updated configuration data from
your Git backend
I see that the config clients are not notified when the configuration changes. I have not configured a git hook for this and was hoping that just setting the property would do the job.
Anoop
Since you have configured the refreshRate property, whenever a config client (another application) calls the config server to fetch properties (this happens either when the application starts or when it calls the /actuator/refresh endpoint), it will get properties that may be up to 15 seconds (your refreshRate) old.
By default the refreshRate property is set to 0, meaning that any time a client application asks for properties, the config server will fetch the latest from Git.
I don't think there is any property that lets your client apps get notified of changes/commits in the Git repo. This is something your app needs to do by calling the /actuator/refresh endpoint. It can be done programmatically using a scheduler (though I wouldn't recommend that).
By default, the config client just reads the properties in the git repo at startup and not again.
You can actually work around this by forcing beans to refresh their configuration from the config server.
First, you need to add the @RefreshScope annotation to the beans whose config needs to be reloaded.
Second, enable the Spring Boot actuator in the application.yml of the config client.
# enable dynamic configuration changes using spring boot actuator
management:
  endpoints:
    web:
      exposure:
        include: '*'
Then configure a scheduled job (using the @Scheduled annotation with fixedRate, ...). Of course, fixedRate should be consistent with the refreshRate of the config server.
Inside that job, execute a request like the one below:
curl -X POST http://username:password@localhost:8888/refresh
Then your config client will pick up changes in the config repo every fixedRate interval.
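A minimal sketch of such a scheduled job, assuming Spring's RestTemplate, that the client exposes the refresh endpoint locally on port 8080, and that @EnableScheduling is present on a configuration class (class name, port and rate are illustrative placeholders):
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;

@Component
public class ConfigRefreshJob {

    private final RestTemplate restTemplate = new RestTemplate();

    // poll at the same rate as the config server's refreshRate (15 s in the question)
    @Scheduled(fixedRate = 15000)
    public void refreshConfig() {
        // POST with an empty body; the endpoint responds with the names of the changed properties
        restTemplate.postForObject("http://localhost:8080/actuator/refresh", null, String.class);
    }
}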
The property spring.cloud.config.server.git.refreshRate is configured in the Config Server and controls how frequently it is going to pull updates, if any, from Git. Otherwise, the Config Server's default behaviour is to connect to the Git repo only when some client requests its configuration.
Git Repo -> Config Server
It has no effect in the communication between the Config Server and its clients.
Config Server -> Spring Boot app (Config Server clients)
Spring Boot apps built as Config Server clients pull all configuration from the Config Server during their startup. To enable them to dynamically update the initially loaded configuration, you need to perform the following steps in your Spring Boot apps, aka the Config Server clients:
Include Spring Boot Actuator in the classpath, for example using Gradle:
implementation 'org.springframework.boot:spring-boot-starter-actuator'
Enable the actuator refresh endpoint via the management.endpoints.web.exposure.include=refresh property
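In application.yml form this would look roughly like:
management:
  endpoints:
    web:
      exposure:
        include: refresh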
Annotate the beans holding the properties in your code with @RefreshScope, for example:
@RefreshScope
@Service
public class BookService {

    @Value("${book.default-page-size:20}")
    private int DEFAULT_PAGE_SIZE;

    //...
}
With the application running, commit some property change into the repo:
git commit -am "Testing property changes"
Trigger the updating process by sending an HTTP POST request to the refresh endpoint, for example (using httpie and assuming your app is running locally on port 8080):
http post :8080/actuator/refresh
and the response should be something like the one below, indicating which properties, if any, have changed:
HTTP/1.1 200
Connection: keep-alive
Content-Type: application/vnd.spring-boot.actuator.v3+json
Date: Wed, 30 Jun 2021 10:18:48 GMT
Keep-Alive: timeout=60
Transfer-Encoding: chunked
[
"config.client.version",
"book.default-page-size"
]

CSV Data Set : Parameterize URL variables in JMeter - wrong CSV file

I am testing a backend application written in NodeJS and Java.
The communication protocols are:
WebSocket for the NodeJS part
and HTTP for the Java part.
In JMeter, I must parameterize the URL to switch between the development, production and preproduction URLs.
I did it using CSV files.
I created a folder containing the CSVs, in the folder where I have JMeter 5.0.
I prepared 3 CSV files in the JMeter bin folder:
development.csv,
production.csv,
prepod.csv
My CSV files are the following:
development.csv:
protocol, host
http, 10.219.227.66
ws, 10.219.227.66
prepod.csv:
protocol, host
https, prepod.myprepod.io
ws, prepod.myprepod.io
production.csv:
protocol, host
https, production.myproduction.io
ws, production.myproduction.io
and I have set in JMeter:
WebSocket Open Connection
Server URL - ws
Server name or IP - ${host}
CSV Data Set Config
Filename - ${__P(environment,development)}.csv
and this project doesn't work; in the log I have:
Caused by: java.lang.IllegalArgumentException: File development.csv must exist and be readable
    at org.apache.jmeter.services.FileServer.createBufferedReader(FileServer.java:424) ~[ApacheJMeter_core.jar:5.0 r1840935]
    at org.apache.jmeter.services.FileServer.readLine(FileServer.java:340) ~[ApacheJMeter_core.jar:5.0 r1840935]
    at org.apache.jmeter.services.FileServer.readLine(FileServer.java:324) ~[ApacheJMeter_core.jar:5.0 r1840935]
    at org.apache.jmeter.services.FileServer.reserveFile(FileServer.java:272) ~[ApacheJMeter_core.jar:5.0 r1840935]
    ... 8 more
2018-10-19 14:29:30,727 INFO o.a.j.t.JMeterThread: Thread finished: Authorize success 1-1
2018-10-19 14:29:30,728 INFO o.a.j.e.StandardJMeterEngine: Notifying test listeners of end of test
2018-10-19 14:29:30,728 INFO o.a.j.g.u.JMeterMenuBar: setRunning(false, local)
What is wrong ?
As per the message:
java.lang.IllegalArgumentException: File development.csv must exist and be readable at ...
it seems the test is using the default value "development", so JMeter looks for development.csv.
So I guess you're trying to run it against another environment; in that case you should run JMeter with this additional parameter:
-Jenvironment=production
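For example, a non-GUI run against production would then look roughly like this (the test plan file name is a placeholder):
jmeter -n -t your_test_plan.jmx -Jenvironment=production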

How do we send canvas image data as an attachment to a server in Pharo?

How do we send or upload a data file to a server in Pharo? I saw an example of sending a file from a directory on the machine.
It works fine.
ZnClient new
url: MyUrl;
uploadEntityFrom: FileLocator home / 'path/to/the/file';
put
In my case I don't want to send/upload a file that was downloaded to the machine; instead I want to send/upload a file hosted somewhere, or data I retrieved over the network, and send it attached to another server.
How can we do that?
Based on your previous questions I presume you are using Linux. The issue here is not within Smalltalk/Pharo, but in the network mapping.
FTP
If you want to use FTP, don't forget it sends the password in plaintext. Set it up in a way that you can mount it. There are probably plenty of ways to do this, but you can try curlftpfs. You need the fuse kernel module for that; make sure it is loaded. If it is not loaded, you can do so via modprobe fuse.
The usage would be:
curlftpfs ftp.yoursite.net /mnt/ftp/ -o user=username:password,allow_other
where you fill in username/password. The option allow_other allows other users on the system to use your mount.
(for more details see the Arch wiki page on curlftpfs)
Webdav
For WebDAV I would use the same approach, this time using davfs.
You would manually mount it via mount command:
mount -t davfs https://yoursite.net:<port>/path /mnt/webdav
There are two reasonable ways to set it up - systemd or fstab. The information below is taken from the davfs2 Arch wiki:
For systemd:
/etc/systemd/system/mnt-webdav-service.mount
[Unit]
Description=Mount WebDAV Service
After=network-online.target
Wants=network-online.target
[Mount]
What=http(s)://address:<port>/path
Where=/mnt/webdav/service
Options=uid=1000,file_mode=0664,dir_mode=2775,grpid
Type=davfs
TimeoutSec=15
[Install]
WantedBy=multi-user.target
You can create a systemd automount unit to set a timeout:
/etc/systemd/system/mnt-webdav-service.automount
[Unit]
Description=Mount WebDAV Service
After=network-online.target
Wants=network-online.target
[Automount]
Where=/mnt/webdav
TimeoutIdleSec=300
[Install]
WantedBy=remote-fs.target
The fstab way is easy if you have edited fstab before (it behaves the same as any other fstab entry):
/etc/fstab
https://webdav.example/path /mnt/webdav davfs rw,user,uid=username,noauto 0 0
For webdav you can even store the credentials securely:
Create a secrets file to store credentials for a WebDAV-service using ~/.davfs2/secrets for user, and /etc/davfs2/secrets for root:
/etc/davfs2/secrets
https://webdav.example/path davusername davpassword
Make sure the secrets file has the correct permissions; for root mounting:
# chmod 600 /etc/davfs2/secrets
# chown root:root /etc/davfs2/secrets
And for user mounting:
$ chmod 600 ~/.davfs2/secrets
Back to your Pharo/Smalltalk code:
I presume you read the above and have either /mnt/ftp or /mnt/webdav mounted.
For e.g. FTP, your code would simply take the file from the mounted directory:
ZnClient new
url: MyUrl;
uploadEntityFrom: '/mnt/ftp/your_file_to_upload' asFileReference;
put
Edit, based on the comments:
The issue is that the configuration of the ZnClient is in Pharo itself and the JSON file is also generated there.
One quick and dirty solution would be to do the mounting described above from a shell command:
With ftp for example:
| commandOutput |
commandOutput := (PipeableOSProcess command: 'curlftpfs ftp.yoursite.net /mnt/ftp/ -o user=username:password,allow_other') output.
Transcript show: commandOutput.
The other approach is more sensible: use Pharo's FTP or WebDAV support via FileSystemNetwork.
To load FTP only:
Gofer it
smalltalkhubUser: 'UdoSchneider' project: 'FileSystemNetwork';
configuration;
load.
#ConfigurationOfFileSystemNetwork asClass project stableVersion load: 'FTP'
To load WebDAV only:
Gofer it
smalltalkhubUser: 'UdoSchneider' project: 'FileSystemNetwork';
configuration;
load.
#ConfigurationOfFileSystemNetwork asClass project stableVersion load: 'Webdav'
To get everything including tests:
Gofer it
smalltalkhubUser: 'UdoSchneider' project: 'FileSystemNetwork';
configuration;
loadStable.
With that you should be able to get a file, for example over FTP:
| ftpConnection wDir file |
"Open a connection"
ftpConnection := FileSystem ftp: 'ftp://ftp.sh.cvut.cz/'.
"Getting working directory"
wDir := ftpConnection workingDirectory.
file := '/Arch/lastsync' asFileReference.
"Close connection - do always!"
ftpConnection close.
Then your upload (via FTP) would look like this:
| ftpConnection wDir file |
"Open connection"
ftpConnection := FileSystem ftp: 'ftp://your_ftp'.
"Getting working directory"
wDir := ftpConnection workingDirectory.
file := '/<your_file_path>' asFileReference.
ZnClient new
url: MyUrl;
uploadEntityFrom: file;
put.
"Close connection - do always!"
ftpConnection close.
The Webdav would be similar.

Liferay 6.2 clustering issue with multicast

I am trying to cluster Ehcache and Lucene with a Liferay 6.2 EE SP2 bundle on 2 servers with multicast enabled. We have Apache HTTPD servers fronting the Tomcat servers using a reverse proxy. A valid 6.2 license is deployed on both nodes.
We use the following properties in portal-ext.properties:
cluster.link.enabled=true
lucene.replicate.write=true
ehcache.cluster.link.replication.enabled=true
# Since we are using SSL on the frontend
web.server.protocol=https
# set this to any server that is visible to both the nodes
cluster.link.autodetect.address=dbserverip:dbport
#ports and ips we know work in our environment for multicast
multicast.group.address["cluster-link-control"]=ip
multicast.group.port["cluster-link-control"]=port1
multicast.group.address["cluster-link-udp"]=ip
multicast.group.port["cluster-link-udp"]=port2
multicast.group.address["cluster-link-mping"]=ip
multicast.group.port["cluster-link-mping"]=port3
multicast.group.address["hibernate"]=ip
multicast.group.port["hibernate"]=port4
multicast.group.address["multi-vm"]=ip
multicast.group.port["multi-vm"]=port5
We are running into issues with the Ehcache and Lucene clustering not working. The following test fails:
Moving a portlet on node 1 does not show up on node 2.
There are no errors except for a startup error with Lucene.
14:19:35,771 ERROR [CLUSTER_EXECUTOR_CALLBACK_THREAD_POOL-1][LuceneHelperImpl:1186] Unable to load index for company 10157
com.liferay.portal.kernel.exception.SystemException: java.net.ConnectException: Connection refused
    at com.liferay.portal.search.lucene.LuceneHelperImpl.getLoadIndexesInputStreamFromCluster(LuceneHelperImpl.java:488)
    at com.liferay.portal.search.lucene.LuceneHelperImpl$LoadIndexClusterResponseCallback.callback(LuceneHelperImpl.java:1176)
    at com.liferay.portal.cluster.ClusterExecutorImpl$ClusterResponseCallbackJob.run(ClusterExecutorImpl.java:614)
    at com.liferay.portal.kernel.concurrent.ThreadPoolExecutor$WorkerTask._runTask(ThreadPoolExecutor.java:682)
    at com.liferay.portal.kernel.concurrent.ThreadPoolExecutor$WorkerTask.run(ThreadPoolExecutor.java:593)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:625)
    at sun.security.ssl.BaseSSLSocketImpl.connect(BaseSSLSocketImpl.java:160)
    at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
    at sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:275)
    at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:371)
We verified that the JGroups multicast works outside of Liferay by running the following commands with a downloaded copy of jgroups.jar, substituting each of the 5 multicast IPs and ports.
Testing with JGROUPS
1) McastReceiver -
java -cp ./jgroups.jar org.jgroups.tests.McastReceiverTest -mcast_addr 224.10.10.10 -port 5555
ex. java -cp jgroups-final.jar org.jgroups.tests.McastReceiverTest -mcast_addr 224.10.10.10 -port 5555
2) McastSender -
java -cp ./jgroups.jar org.jgroups.tests.McastSenderTest -mcast_addr 224.10.10.10 -port 5555
ex. java -cp jgroups-final.jar org.jgroups.tests.McastSenderTest -mcast_addr 224.10.10.10 -port 5555
From there, typing things into the McastSender will result in the Receiver printing it out.
Thanks!
After a lot of troubleshooting and help from various folks in my team and at liferay support, we switched to using unicast and it worked a lot better.
Here is what we did:
Extracted jgroups.jar from tomcat home/webapps/ROOT/WEB-INF/lib and saved it locally.
Unzipped the jgroups.jar file and extracted and saved the tcp.xml it contains.
As a baseline test, changed the TCPPING section in the tcp.xml as follows and saved it:
<TCPPING timeout="3000"
         initial_hosts="${jgroups.tcpping.initial_hosts:servername1[7800],servername2[7800]}"
         port_range="1"
         num_initial_members="10"/>
Copied the tcp.xml to the Liferay home on both nodes.
Changed portal-ext.properties to remove the multicast properties and added the following lines:
cluster.link.channel.properties.control=${liferay.home}/tcp.xml
cluster.link.channel.properties.transport.0=${liferay.home}/tcp.xml
Started node 1.
Started node 2.
Checked the logs.
Did the cluster cache tests:
Moving a portlet on node 1 shows up on node 2.
Under Control Panel -> License Manager, both nodes show up with valid licenses.
Searching on node 2 for a user added on node 1 in Control Panel -> Users and Organizations.
All of the above tests worked.
So we shut down the servers and changed the tcp.xml to use JDBC rather than TCPPING, so we don't have to specify node names manually.
Steps for the JDBC config:
Create the table in the liferay database manually.
CREATE TABLE JGROUPSPING (own_addr varchar(200) not null, cluster_name varchar(200) not null, ping_data blob default null, primary key (own_addr, cluster_name))
Change tcp.xml: remove the TCPPING section and add the following:
<JDBC_PING datasource_jndi_name="java:comp/env/jdbc/LiferayPool"
           initialize_sql="" />
Save and push the file manually to both the nodes.
Start the servers and repeat tests above.
It should work seamlessly.
It was invaluable to have the debug logging for JGroups turned on, as mentioned in the following post:
https://bitsofinfo.wordpress.com/2014/05/21/clustering-liferay-globally-across-data-centers-gslb-with-jgroups-and-relay2/
Here is the tomcat home/webapps/ROOT/WEB-INF/classes/META-INF/portal-log4j-ext.xml file I used to triage various issues on bootup related to clustering:
<?xml version="1.0"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
<category name="com.liferay.portal.cluster">
<priority value="TRACE" />
</category>
<category name="com.liferay.portal.license">
<priority value="TRACE" />
</category>
</log4j:configuration>
We also found that the Lucene cluster replication startup errors were fixed in a fix pack and are getting a patch for it.
https://issues.liferay.com/browse/LPS-51714
https://issues.liferay.com/browse/LPS-51428
We added the following portal instance properties for Lucene replication to work better between the 2 nodes:
# the port that the app servers listen on, e.g. 8080
portal.instance.http.port=8080
portal.instance.protocol=http
Hope this helps someone.
Update
The Lucene index load in a cluster issue was resolved by a Liferay 6.2 EE patch from support for the LPS issues mentioned above.

Setting up Sahi, Behat & PhantomJS on Vagrant

I'm trying to set up automated testing with PhantomJS, Behat and Sahi on my vagrant machine.
I'm getting the following output, when trying to run a test with behat:
[Behat\SahiClient\Exception\ConnectionException]
Exception has been thrown in "afterStep" hook, defined in FeatureContext::afterStep()
Connection time limit reached
Here is my userdata.properties:
# dirs. Relative paths are relative to userdata dir. Separate directories with semi-colon
scripts.dir=scripts;
# default log directory.
logs.dir=logs
# Directory where auto generated ssl cerificates are stored
certs.dir=certs
# Use external proxy server for http
ext.http.proxy.enable=false
ext.http.proxy.host=
ext.http.proxy.port=
ext.http.proxy.auth.enable=false
ext.http.proxy.auth.name=kamlesh
ext.http.proxy.auth.password=password
# Use external proxy server for https
ext.https.proxy.enable=false
ext.https.proxy.host=
ext.https.proxy.port=
ext.https.proxy.auth.enable=false
ext.https.proxy.auth.name=kamlesh
ext.https.proxy.auth.password=password
# There is only one bypass list for both secure and insecure.
ext.http.both.proxy.bypass_hosts=localhost|127.0.0.1|*.internaldomain.com
# Mark this property true to disable the proxy alert
proxy_alert.disabled=false
And my browser_types.xml:
<browserTypes>
<browserType>
<name>phantomjs</name>
<displayName>PhantomJS</displayName>
<icon>safari.png</icon>
<path>/usr/bin/phantomjs</path>
<options>--ignore-ssl-errors=yes --proxy=localhost:9999 --ssl-protocol=any /usr/local/sahi/phantomjs-sahi.js</options>
<processName>phantomjs</processName>
<capacity>100</capacity>
<force>true</force>
</browserType>
</browserTypes>
behat.yml:
default:
  extensions:
    Behat\MinkExtension\Extension:
      javascript_session: sahi
      browser_name: phantomjs
      goutte: ~
      sahi:
        host: localhost
        port: 9999
Sahi run output:
--------
SAHI_HOME: ..
SAHI_USERDATA_DIR: ../userdata
SAHI_EXT_CLASS_PATH:
--------
Sahi properties file = /usr/local/sahi/config/sahi.properties
Sahi user properties file = /usr/local/sahi/userdata/config/userdata.properties
Added shutdown hook.
>>>> Sahi OS v5.0 started. Listening on port: 9999
>>>> Configure your browser to use this server and port as its proxy
>>>> Browse any page and CTRL-ALT-DblClick on the page to bring up the Sahi Controller
-----
Reading browser types from: /usr/local/sahi/userdata/config/browser_types.xml
-----
I've tried reinstalling a bunch of stuff and playing around with the ports, processes and proxy settings - nothing worked.
Your Vagrant box comes with an empty database or none at all, so when you try to connect to your app, e.g. log in with some known user, it will crash because it won't find that user!
All the best ;)
The browserType settings changed since version 4.3.2; there is no <force> tag anymore. Please check:
https://sahipro.com/docs/using-sahi/sahi-headless-execution-with-phantomjs.html#Documentation since Sahi Pro V4.3.2