How to suppress/control logging of Wagon-FTP Maven extension?

I'm deploying a Maven site by FTP, using Wagon-FTP. It works fine, but the output is full of FTP connection/authentication details, which effectively exposes logins and passwords to everybody (especially if the project is open source and its CI logs are publicly accessible):
[...]
[INFO]
[INFO] --- maven-site-plugin:3.0-beta-3:deploy (default-deploy) # rempl ---
Reply received: 220---------- Welcome to Pure-FTPd [privsep] [TLS] ----------
220-You are user number 1 of 50 allowed.
220-Local time is now 09:08. Server port: 21.
220 You will be disconnected after 15 minutes of inactivity.
Command sent: USER ****
Reply received: 331 User **** OK. Password required
Command sent: PASS ********
Reply received: 230-User **** has group access to: ***
230 OK. Current restricted directory is /
[...]
Is it possible to suppress this logging, or configure it? This is the section of my pom.xml where Wagon-FTP is used:
[...]
<build>
  <extensions>
    <extension>
      <groupId>org.apache.maven.wagon</groupId>
      <artifactId>wagon-ftp</artifactId>
      <version>1.0-beta-7</version>
    </extension>
  </extensions>
  [...]
</build>
[...]

Not possible; basically it is related to the maven-site-plugin and not wagon-ftp (which is only a thin adapter around the Apache Commons Net FTP client). See the source of AbstractDeployMojo from line 310:
Debug debug = new Debug();
wagon.addSessionListener( debug );
wagon.addTransferListener( debug );
Where Debug writes everything to standard output.
IMHO the nicer solution would be a more sophisticated SessionListener, or a flag in the Wagon source to skip addSessionListener(debug) when it is not needed.
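To make that concrete, here is a minimal sketch of what such a listener could look like, written against the SessionListener interface from wagon-provider-api. This is hypothetical: the site plugin hard-codes its Debug listener, so there is currently no way to register a listener like this without patching the plugin.

import org.apache.maven.wagon.events.SessionEvent;
import org.apache.maven.wagon.events.SessionListener;

// Swallows the raw protocol chatter (the USER/PASS lines that leak
// credentials) but still surfaces real session errors.
public class QuietSessionListener implements SessionListener {

    // the raw FTP dialogue arrives through this callback -- drop it
    public void debug( String message ) { }

    public void sessionError( SessionEvent event ) {
        System.err.println( "session error: " + event.getException() );
    }

    // the remaining lifecycle callbacks stay silent
    public void sessionOpening( SessionEvent event ) { }
    public void sessionOpened( SessionEvent event ) { }
    public void sessionDisconnecting( SessionEvent event ) { }
    public void sessionDisconnected( SessionEvent event ) { }
    public void sessionConnectionRefused( SessionEvent event ) { }
    public void sessionLoggedIn( SessionEvent event ) { }
    public void sessionLoggedOff( SessionEvent event ) { }
}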

Related

KEYCLOAK unknown authenticator error - Client Adapter installed

I am trying to set up Keycloak with Tomcat 8.
I followed the instructions carefully: I downloaded the Client Adapter for Tomcat 8, copied all the jars into the $CATALINA_HOME/lib directory, and modified my web.xml login-config to use KEYCLOAK. Yet when I start Tomcat I keep getting a severe "Unknown Authenticator" error.
Everywhere I googled, everyone said you have to install the Client Adapter, but in my case it is already there! Help!
I think you forgot to create a META-INF directory beside WEB-INF and put a file named context.xml into it. The contents of this file have to be:
<?xml version="1.0" encoding="UTF-8"?>
<Context>
  <Valve className="org.keycloak.adapters.tomcat.KeycloakAuthenticatorValve"/>
</Context>
This is not needed if you deploy the war into WildFly. I had the same problem when I tried to move a perfectly working web app from WildFly to Tomcat.
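For reference, the web.xml login-config the question mentions would look something like this (a sketch; the realm-name value here is just a label and, to my knowledge, is not interpreted by the Keycloak valve):

<login-config>
  <auth-method>KEYCLOAK</auth-method>
  <realm-name>my-app</realm-name>
</login-config>

Without the valve registered in META-INF/context.xml, Tomcat has no authenticator mapped to the KEYCLOAK method, which is presumably what produces the "Unknown Authenticator" error.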

Liferay 6.2 clustering issue with multicast

I am trying to cluster ehcache and lucene with a Liferay 6.2 EE sp2 bundle on 2 servers with multicast enabled. We have Apache HTTPD servers fronting the tomcat servers using a reverse proxy. A valid 6.2 license is deployed on both nodes.
We use the following properties in portal-ext.properties:
cluster.link.enabled=true
lucene.replicate.write=true
ehcache.cluster.link.replication.enabled=true
# Since we are using SSL on the frontend
web.server.protocol=https
# set this to any server that is visible to both the nodes
cluster.link.autodetect.address=dbserverip:dbport
#ports and ips we know work in our environment for multicast
multicast.group.address["cluster-link-control"]=ip
multicast.group.port["cluster-link-control"]=port1
multicast.group.address["cluster-link-udp"]=ip
multicast.group.port["cluster-link-udp"]=port2
multicast.group.address["cluster-link-mping"]=ip
multicast.group.port["cluster-link-mping"]=port3
multicast.group.address["hibernate"]=ip
multicast.group.port["hibernate"]=port4
multicast.group.address["multi-vm"]=ip
multicast.group.port["multi-vm"]=port5
We are running into issues with the ehcache and lucene clustering not working. The following test fails: moving a portlet on node 1 does not show up on node 2. There are no errors, except for a startup error with lucene:
14:19:35,771 ERROR [CLUSTER_EXECUTOR_CALLBACK_THREAD_POOL-1][LuceneHelperImpl:1186] Unable to load index for company 10157
com.liferay.portal.kernel.exception.SystemException: java.net.ConnectException: Connection refused
    at com.liferay.portal.search.lucene.LuceneHelperImpl.getLoadIndexesInputStreamFromCluster(LuceneHelperImpl.java:488)
    at com.liferay.portal.search.lucene.LuceneHelperImpl$LoadIndexClusterResponseCallback.callback(LuceneHelperImpl.java:1176)
    at com.liferay.portal.cluster.ClusterExecutorImpl$ClusterResponseCallbackJob.run(ClusterExecutorImpl.java:614)
    at com.liferay.portal.kernel.concurrent.ThreadPoolExecutor$WorkerTask._runTask(ThreadPoolExecutor.java:682)
    at com.liferay.portal.kernel.concurrent.ThreadPoolExecutor$WorkerTask.run(ThreadPoolExecutor.java:593)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Connection refused
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:579)
    at sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:625)
    at sun.security.ssl.BaseSSLSocketImpl.connect(BaseSSLSocketImpl.java:160)
    at sun.net.NetworkClient.doConnect(NetworkClient.java:180)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
    at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
    at sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:275)
    at sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:371)
We verified that JGroups multicast works outside of Liferay by running the following commands with a downloaded copy of jgroups.jar, substituting our 5 multicast IPs and ports.
Testing with JGroups:
1) McastReceiver:
java -cp jgroups.jar org.jgroups.tests.McastReceiverTest -mcast_addr 224.10.10.10 -port 5555
2) McastSender:
java -cp jgroups.jar org.jgroups.tests.McastSenderTest -mcast_addr 224.10.10.10 -port 5555
From there, typing things into the McastSender results in the receiver printing them out.
Thanks!
After a lot of troubleshooting, and help from various folks on my team and at Liferay support, we switched to using unicast and it worked a lot better.
Here is what we did:
Extracted jgroups.jar from tomcat home/webapps/ROOT/WEB-INF/lib and saved it locally.
Unzipped the jgroups.jar file and extracted the tcp.xml it contains.
As a baseline test, changed the TCPPING section in the tcp.xml and saved:
<TCPPING timeout="3000"
    initial_hosts="${jgroups.tcpping.initial_hosts:servername1[7800],servername2[7800]}"
    port_range="1"
    num_initial_members="10"/>
Copied the tcp.xml to the liferay home on both nodes.
Changed portal-ext.properties to remove the multicast properties and add the following lines:
cluster.link.channel.properties.control=${liferay.home}/tcp.xml
cluster.link.channel.properties.transport.0=${liferay.home}/tcp.xml
Started node 1, started node 2, and checked the logs.
Did the cluster cache tests:
Moving a portlet on node 1 shows up on node 2.
Under Control Panel -> License Manager, both nodes show up with valid licenses.
Searching on node 2 for a user added on node 1 under Control Panel -> Users and Organizations works.
All of the above tests passed.
So we shut down the servers and changed the tcp.xml to use JDBC rather than TCPPING, so we don't have to specify node names manually.
Steps for the JDBC config:
Created the table in the liferay database manually:
CREATE TABLE JGROUPSPING (
    own_addr varchar(200) not null,
    cluster_name varchar(200) not null,
    ping_data blob default null,
    primary key (own_addr, cluster_name)
)
Changed tcp.xml to remove the TCPPING section and add the following:
<JDBC_PING datasource_jndi_name="java:comp/env/jdbc/LiferayPool"
    initialize_sql=""/>
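As a quick sanity check once both nodes are up (a suggestion, not one of the original steps), you can query the table created above to see which members JGroups has registered:

SELECT own_addr, cluster_name FROM JGROUPSPING;

Each live node should show up once per cluster channel; stale rows left behind by an unclean shutdown can usually be deleted, since members re-insert themselves on startup.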
Saved and pushed the file manually to both nodes.
Started the servers and repeated the tests above.
It all worked seamlessly.
It was invaluable to have debug logging turned on for JGroups, as described in the following post:
https://bitsofinfo.wordpress.com/2014/05/21/clustering-liferay-globally-across-data-centers-gslb-with-jgroups-and-relay2/
This is the tomcat home/webapps/ROOT/WEB-INF/classes/META-INF/portal-log4j-ext.xml file I used to triage various clustering issues on bootup:
<?xml version="1.0"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
  <category name="com.liferay.portal.cluster">
    <priority value="TRACE" />
  </category>
  <category name="com.liferay.portal.license">
    <priority value="TRACE" />
  </category>
</log4j:configuration>
We also found that the Lucene cluster replication startup errors are fixed in a fix pack, and we are getting a patch for it:
https://issues.liferay.com/browse/LPS-51714
https://issues.liferay.com/browse/LPS-51428
We added the following portal instance properties to make lucene replication work better between the 2 nodes:
# the port the app servers listen on, e.g. 8080
portal.instance.http.port=8080
portal.instance.protocol=http
Hope this helps someone.
Update:
The lucene index load issue in a cluster was resolved by a Liferay 6.2 EE patch from support for the LPS issues mentioned above.

Arquillian tomee remote

Using Arquillian 1.1.4.Final and TomEE 1.6.0.2.
I took the tomee-plus-remote profile setup from the TomEE documentation on Arquillian adapters and put it into the Maven pom.xml (with activeByDefault set to true).
The goal is to deploy an MQ JCA rar into the remote TomEE and configure a connection factory to MQ.
I initially set the arquillian.xml to:
<container qualifier="tomee" default="true">
  <configuration>
    <property name="httpPort">-1</property>
    <property name="stopPort">-1</property>
  </configuration>
</container>
Running via JUnit, I am not sure why the webprofile is initialized and started rather than plus (when I have tomee-plus specified in Maven):
Info: Succeeded in installing singleton service
Jun 11, 2014 11:07:52 AM org.apache.openejb.config.ConfigurationFactory init
Info: openejb configuration file is 'C:\Users\MYG\AppData\Local\Temp\arquillian-apache-tomee\apache-tomee-webprofile-1.6.0.2\conf\tomee.xml'
Another thing is how to load a tomee.xml configuration. I thought the "serverXml" property in arquillian.xml (set to src/test/resources/tomee.xml) would work, but then nothing inside that xml is recognized as a valid rule; I can't add directives like Deployments as one does with resources. So how do I configure the remote TomEE from Arquillian?
Yeah, tomee.xml was not really designed to be used from arquillian.xml, since all of its configuration can be passed to the properties attribute of the tomee container, in properties format.
By adding a conf property to the arquillian.xml pointing at, for example, src/test/conf where there is a tomee.xml file, that file gets loaded. This must be a TomEE thing that I didn't know about until now.
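A sketch of what that can look like in arquillian.xml, combining both approaches (the conf path and the connection factory name are illustrative; the inline definition uses TomEE's standard new://Resource properties syntax):

<container qualifier="tomee" default="true">
  <configuration>
    <property name="httpPort">-1</property>
    <property name="stopPort">-1</property>
    <!-- load tomee.xml and friends from this directory -->
    <property name="conf">src/test/conf</property>
    <!-- or declare resources inline, in properties format -->
    <property name="properties">
      mqConnectionFactory = new://Resource?type=javax.jms.ConnectionFactory
    </property>
  </configuration>
</container>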

Getting an encryption error when attempting to embed a WLST script in a Java class

I'm trying to put together a small utility that will let us pull the listen addresses and ports of the managed servers in a domain.
WLST seemed like the right tool to use.
I've got a script that works something like this:
import sys

admin_url = sys.argv[1]
cluster = sys.argv[2]

connect(url=admin_url)

servers = get_servers(cluster)  # helper that looks up the servers, not shown

addresses = []
for server in servers.values():
    address = server.getListenAddress()
    port = str(server.getListenPort())
    server_url = address + ":" + port
    addresses.append(server_url)

print ','.join(addresses)
We're using weblogic keys to store the username and password, so there is no need to pass connect() the username and password. It works fine, but we need to use this from an ant script, and it looks like the only way to get info out of WLST and back into ant is by capturing the output.
The first problem I ran into is that WLST prints a header when you invoke it that you can't suppress: "Initializing WebLogic Scripting Tool (WLST) ...", etc.
A little searching reveals there's no way to suppress that if you invoke WLST directly, but if you embed your script in a java class, the embedded interpreter won't output the header.
I wrapped my script in a class, compiled it, and it runs with no problem when I run it using java...
>java wlst.GetClusterAddress t3://myhost:7001 mycluster
mymanagedserver1:9999,mymanagedserver2:9999
So far so good.
Now I try to wrap that class in my ant script...
<java classname="wlst.GetClusterAddress" outputproperty="addresses">
  <arg line="${admin.url} ${cluster.name}"/>
  <classpath refid="class.path"/>
</java>
Ant throws an exception when connecting to the admin server
[java] WLSTException: Error occured while performing connect : Error connecting to the server : weblogic.security.internal.encryption.EncryptionServiceException: weblogic.security.internal.encryption.EncryptionServiceException: [Security:090219]Error decrypting Secret Key java.lang.SecurityException: The provider self-integrity check failed.
[java] Use dumpStack() to view the full stacktrace
[java]
I've checked my classpath, and all seems to be the same between java and ant. I'm not sure where to look next. Why doesn't this work when using ant?
Try setting fork="true" in the java task, so the class runs in its own JVM rather than inside Ant's (the "provider self-integrity check failed" message typically means the signed JCE provider jars were loaded through a classloader that breaks their signature verification):
<java classname="wlst.GetClusterAddress" outputproperty="addresses" fork="true">
  <arg line="${admin.url} ${cluster.name}"/>
  <classpath refid="class.path"/>
</java>

TortoiseSVN Can't Authenticate

After my previous problem (TortoiseSVN Can't Connect) was resolved, I ran into a new one.
On the linux server hosting my svn repository, in the repository's directory, there is a conf/svnserve.conf file. In this file, I have the option:
anon-access = none | read | write
Initially, this line was commented out and the default value must have been read.
Of course, I want to set anon-access = none, and I want auth-access = write (which is the default).
But when I set anon-access = none and try to browse with the TortoiseSVN Repository Browser using the URL svn://host:port/repositoryname, I get the error:
Unable to connect to a repository at URL
'svn://host:port/repositoryname' No access allowed to this repository
I'd like to successfully authenticate without ssh if possible, because I gather ssh has more moving parts and might be a little slower.
The server is CloudLinux Server release 5.8
The svn server information follows. I have only tried svn protocol so far.
svn, version 1.6.17 (r1128011) compiled Jul 26 2012, 03:59:19
Copyright (C) 2000-2009 CollabNet. Subversion is open source software,
see http://subversion.apache.org/ This product includes software
developed by CollabNet (http://www.Collab.Net/).
The following repository access (RA) modules are available:
ra_neon : Module for accessing a repository via WebDAV protocol using Neon.
handles 'http' scheme
ra_svn : Module for accessing a repository using the svn network protocol.
with Cyrus SASL authentication
handles 'svn' scheme
ra_local : Module for accessing a repository on local disk.
handles 'file' scheme
ra_serf : Module for accessing a repository via WebDAV protocol using serf.
handles 'http' scheme
handles 'https' scheme
I hope this is a good question, because this is kind of the out-of-the-box behavior when connecting to svn from Windows, which might be pretty common when someone adds svn to a shared hosting account.
Thank you!
Set these lines in your svnserve.conf file:
19 anon-access = none
20 auth-access = write
[...]
27 password-db = passwd
[...]
39 realm = Name-of-your-repository
46 force-username-case = lower
The line numbers are approximate.
The realm can be any string, but it usually equals the name of your repository. The password-db setting points to the file that lists who is authorized to use the repository; by default, the line is commented out.
Next, you'll edit the passwd file that's in the same directory. The format is very simple:
<userName> = <password>
There are two commented-out entries that show you how it's done.
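For instance, the stock passwd file ships with sample entries along these lines (shown here uncommented; the names and passwords are of course just placeholders):

[users]
harry = harryssecret
sally = sallyssecret

After saving the file, reconnect from TortoiseSVN and authenticate as one of the users you defined.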