Unable to send test reports via email using maven plugin - selenium

I am unable to send test reports via email using a Maven plugin.
Errors:
[ERROR] Sending the email to the following server failed : smtp.gmail.com:465: AuthenticationFailedException -> [Help 1]
[ERROR] Sending the email to the following server failed : smtp.gmail.com:995: Could not connect to SMTP host: smtp.gmail.com, port: 995, response: -1 -> [Help 1]
I tried ports 465 and 587 as well, but nothing worked. Any suggestion would really help; I haven't been able to find a solution on the internet yet.
**Code:**
<plugin>
  <groupId>ch.fortysix</groupId>
  <artifactId>maven-postman-plugin</artifactId>
  <executions>
    <execution>
      <id>send a mail</id>
      <phase>test</phase>
      <goals>
        <goal>send-mail</goal>
      </goals>
      <inherited>true</inherited>
      <configuration>
        <!-- From email address -->
        <from>test@totalitycorp.com</from>
        <!-- Email subject -->
        <subject>Yovo Test Automation Report</subject>
        <!-- Don't fail the build if the mail doesn't reach its destination -->
        <failonerror>false</failonerror>
        <!-- Host -->
        <mailhost>smtp.gmail.com</mailhost>
        <!-- Port of the host -->
        <mailport>995</mailport>
        <mailssl>true</mailssl>
        <mailAltConfig>true</mailAltConfig>
        <!-- Email authentication (username and password) -->
        <mailuser>test@gmail.com</mailuser>
        <mailpassword>234aASD</mailpassword>
        <receivers>
          <!-- To email address -->
          <receiver>abhi.c74@gmail.com</receiver>
        </receivers>
        <fileSets>
          <fileSet>
            <!-- Report directory path -->
            <directory>/home/maverick/eclipse-workspace/YovoAndroidAutomation/test-output</directory>
            <includes>
              <!-- Report file name -->
              <include>emailable-report.html</include>
            </includes>
            <!-- Use patterns like **/*.html if you want to send all the HTML files -->
          </fileSet>
        </fileSets>
      </configuration>
    </execution>
  </executions>
</plugin>

[ERROR] Sending the email to the following server failed : smtp.gmail.com:465: Authentication Required -> [Help 1]
This is a very interesting issue. To log in to Gmail from an external application (anything other than the Gmail login page itself), you need to enable the "less secure apps" option.
Here is how to do this:
Go to the "Less secure apps" section in My Account.
Next to "Access for less secure apps," select "Turn on".
Note for Google Apps users: this setting is hidden if your administrator has locked less secure app account access.
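Once the account allows external SMTP logins, the connection settings also have to match: port 995 is the POP3-over-SSL port, not an SMTP port. A minimal sketch of the relevant plugin settings, assuming SMTP over SSL on port 465 (credentials are placeholders from the question):

<!-- Sketch: SMTP over SSL on port 465; port 995 is POP3 and will not accept mail submission -->
<mailhost>smtp.gmail.com</mailhost>
<mailport>465</mailport>
<mailssl>true</mailssl>
<mailuser>test@gmail.com</mailuser>
<mailpassword>your-password-here</mailpassword>

If the account has 2-Step Verification enabled, an app password generated in the Google account security settings has to be used instead of the normal account password.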

Related

Unimrcp server sending 100 and 200 to a wrong port

<?xml version="1.0" encoding="UTF-8"?>
<!-- UniMRCP server document -->
<unimrcpserver xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="unimrcpserver.xsd" version="1.0">
  <properties>
    <ip type="auto"/>
  </properties>
  <components>
    <!-- Factory of MRCP resources -->
    <resource-factory>
      <resource id="speechsynth" enable="true"/>
      <resource id="speechrecog" enable="true"/>
      <resource id="recorder" enable="true"/>
      <resource id="speakverify" enable="true"/>
    </resource-factory>
    <!-- SofiaSIP MRCPv2 signaling agent -->
    <sip-uas id="SIP-Agent-1" type="SofiaSIP">
      <sip-port>8060</sip-port>
      <sip-transport>udp</sip-transport>
      <ua-name>UniMRCP SofiaSIP</ua-name>
      <sdp-origin>UniMRCPServer</sdp-origin>
    </sip-uas>
    <!-- UniRTSP MRCPv1 signaling agent -->
    <rtsp-uas id="RTSP-Agent-1" type="UniRTSP">
      <rtsp-port>1554</rtsp-port>
      <!-- <force-destination>true</force-destination> -->
      <resource-map>
        <param name="speechsynth" value="speechsynthesizer"/>
        <param name="speechrecog" value="speechrecognizer"/>
      </resource-map>
      <max-connection-count>100</max-connection-count>
      <sdp-origin>UniMRCPServer</sdp-origin>
    </rtsp-uas>
    <!-- MRCPv2 connection agent -->
    <mrcpv2-uas id="MRCPv2-Agent-1">
      <mrcp-port>1554</mrcp-port>
      <max-connection-count>100</max-connection-count>
      <force-new-connection>false</force-new-connection>
      <rx-buffer-size>1024</rx-buffer-size>
      <tx-buffer-size>1024</tx-buffer-size>
    </mrcpv2-uas>
    <!-- Media processing engine -->
    <media-engine id="Media-Engine-1">
      <realtime-rate>1</realtime-rate>
    </media-engine>
    <!-- Factory of RTP terminations -->
    <rtp-factory id="RTP-Factory-1">
      <rtp-port-min>5000</rtp-port-min>
      <rtp-port-max>6000</rtp-port-max>
    </rtp-factory>
    <!-- Factory of plugins (MRCP engines) -->
    <plugin-factory>
      <engine id="Demo-Synth-1" name="demosynth" enable="true"/>
      <engine id="Demo-Recog-1" name="demorecog" enable="true"/>
      <engine id="Demo-Verifier-1" name="demoverifier" enable="true"/>
      <engine id="Recorder-1" name="mrcprecorder" enable="true"/>
    </plugin-factory>
  </components>
  <settings>
    <!-- RTP/RTCP settings -->
    <rtp-settings id="RTP-Settings-1">
      <jitter-buffer>
        <adaptive>1</adaptive>
        <playout-delay>50</playout-delay>
        <max-playout-delay>600</max-playout-delay>
        <time-skew-detection>1</time-skew-detection>
      </jitter-buffer>
      <ptime>20</ptime>
      <codecs own-preference="false">PCMU 8000</codecs>
      <!-- enable/disable RTCP support -->
      <rtcp enable="false">
        <rtcp-bye>1</rtcp-bye>
        <!-- RTCP transmission interval in msec (set 0 to disable) -->
        <tx-interval>5000</tx-interval>
        <!-- period (timeout) to check for new RTCP messages in msec (set 0 to disable) -->
        <rx-resolution>1000</rx-resolution>
      </rtcp>
    </rtp-settings>
  </settings>
  <profiles>
    <!-- MRCPv2 default profile -->
    <mrcpv2-profile id="uni2">
      <sip-uas>SIP-Agent-1</sip-uas>
      <mrcpv2-uas>MRCPv2-Agent-1</mrcpv2-uas>
      <media-engine>Media-Engine-1</media-engine>
      <rtp-factory>RTP-Factory-1</rtp-factory>
      <rtp-settings>RTP-Settings-1</rtp-settings>
    </mrcpv2-profile>
    <!-- MRCPv1 default profile -->
    <mrcpv1-profile id="uni1">
      <rtsp-uas>RTSP-Agent-1</rtsp-uas>
      <media-engine>Media-Engine-1</media-engine>
      <rtp-factory>RTP-Factory-1</rtp-factory>
      <rtp-settings>RTP-Settings-1</rtp-settings>
    </mrcpv1-profile>
    <!-- more profiles might be added here -->
  </profiles>
</unimrcpserver>
Hello,
I'm trying to connect a VBVoice application to a UniMRCP server for TTS. The application successfully sends an INVITE to the server, and the server replies with 100 and 200 responses; however, they all go to the wrong port (5060 instead of 8060). Am I missing anything in the config file?
VBVoice can be configured to change the port used for the MRCP connection, as it applies to the VBVMRCPClient. To modify the port used by the VBVoice MRCP Client, open the Pronexus Control Panel, then access the VBVConfig utility. Along the left of VBVConfig, access the MRCP section. Here you will see options for ASRServerPort and TTSServerPort. The default port is 5060; you can set this to any port number that is available. After making the needed change, use the File dropdown to select "Save All Keys" and close VBVConfig. The config change will be applied the next time you start the VBVoice MRCP Client.
Of note: changing the MRCP port number is typically required when the VBVoice IVR is running a VoIP telephony protocol, because the default VoIP port VBVoice uses is 5060.
Also of note: verify whether the Speech Server running your ASR/TTS system will use TCP or UDP for the MRCP connection. By default, VBVoice is configured to use TCP. This can be modified in the VBVConfig utility in the MRCP section; just look for the ASRServerPortIsTCP and TTSServerPortIsTCP options.
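If changing the client is not an option, the transport mismatch can also be approached from the server side. As a sketch (assuming the SofiaSIP agent accepts a comma-separated transport list, which should be verified against your UniMRCP version), the signaling agent from the config above could listen on both transports so a TCP client can reach it:

<!-- Sketch: accept both UDP and TCP, since VBVoice defaults to TCP -->
<sip-uas id="SIP-Agent-1" type="SofiaSIP">
  <sip-port>8060</sip-port>
  <sip-transport>udp,tcp</sip-transport>
  <ua-name>UniMRCP SofiaSIP</ua-name>
  <sdp-origin>UniMRCPServer</sdp-origin>
</sip-uas>

Either way, the port and transport configured in VBVConfig must match what the sip-uas element declares.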

wso2 BAM: Authentication failed! admin-bam

When I change the admin-bam password from the Web Console (the default password for admin-bam is "admin") via Home > Configure > Users and Roles > Change My Password, I receive the following log errors:
TID: [0] [BAM] [2015-11-04 08:36:07,718] INFO {org.wso2.carbon.databridge.core.DataBridge} - user admin-bam connected {org.wso2.carbon.databridge.core.DataBridge}
TID: [0] [BAM] [2015-11-04 08:36:07,733] ERROR {org.wso2.carbon.databridge.core.internal.authentication.Authenticator} - Authentication failed! admin-bam. This user is not permitted to publish events. {org.wso2.carbon.databridge.core.internal.authentication.Authenticator}
I also changed my user-mgt.xml and restarted BAM, but I receive the same error:
<Realm>
  <Configuration>
    <AddAdmin>true</AddAdmin>
    <AdminRole>admin-bam</AdminRole>
    <AdminUser>
      <UserName>admin-bam</UserName>
      <Password>NEW_PASSWORD_HERE</Password>
    </AdminUser>
    <EveryOneRoleName>everyone</EveryOneRoleName> <!-- By default, users in this role see the registry root -->
    <Property name="dataSource">jdbc/USER_LST</Property>
  </Configuration>
  ...
If I set admin-bam/admin again, those log errors disappear. Where is the error?
You have to change the password in the following way:
Configure > Users and Roles > Users > Change Password
Then you need to change the BAMUsername and BAMPassword to match the new username and password you defined. The configuration is given below:
<APIUsageTracking>
  <!-- Enable/Disable the API usage tracker. -->
  <Enabled>true</Enabled>
  <PublisherClass>org.wso2.carbon.apimgt.usage.publisher.APIMgtUsageDataBridgeDataPublisher</PublisherClass>
  <ThriftPort>7614</ThriftPort>
  <BAMServerURL>tcp://<BAM host IP>:7614/</BAMServerURL>
  <BAMUsername>admin</BAMUsername>
  <BAMPassword>admin</BAMPassword>
  <!-- JNDI name of the data source to be used for getting BAM statistics. This data source
       should be defined in the master-datasources.xml file in the conf/datasources directory. -->
  <DataSourceName>jdbc/WSO2AM_STATS_DB</DataSourceName>
</APIUsageTracking>
You can find this file in following location:
API Manager/repository/conf/api-manager.xml
If you are in a clustered environment, it is sufficient to change the above settings only in the Gateway node.
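For the setup in the question, the updated block would look something like this (NEW_PASSWORD_HERE stands for whatever password was set in the console):

<!-- Match the credentials changed for admin-bam in the console -->
<BAMUsername>admin-bam</BAMUsername>
<BAMPassword>NEW_PASSWORD_HERE</BAMPassword>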

HBase master won't start, Can't connect to hbase.rootdir

I'm trying to run HBase in pseudo-distributed mode based on the setup described on the Apache website, but I'm having trouble configuring the hbase.rootdir property correctly.
This is what my configuration files look like:
In Hadoop directory:
conf/core-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
conf/hdfs-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
  </property>
</configuration>
conf/mapred-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
In my HBase directory
hbase-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
/**
* Copyright 2010 The Apache Software Foundation
*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
-->
<configuration>
  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
</configuration>
When I run the start-hbase.sh script, it says that it starts ZooKeeper, the HBase master, and the region server, and I'm able to log in to them. I can then access the HBase shell, but I can't create tables or anything. I tried to connect to the master-status UI using my web browser, but it wouldn't connect. At first I thought it was because I was running it on an Amazon instance and port 9000 wasn't granted permission, but I found that it was. Ports 50030 and 50070 are granted the same permissions, and I'm able to access the JobTracker and NameNode through them. I checked the logs and found this error:
2013-08-05 18:00:35,613 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 9 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
2013-08-05 18:00:35,616 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
java.net.ConnectException: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
at org.apache.hadoop.ipc.Client.wrapException(Client.java:1136)
at org.apache.hadoop.ipc.Client.call(Client.java:1112)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
at com.sun.proxy.$Proxy10.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:411)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:135)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:276)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:241)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:100)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1411)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1429)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:667)
at org.apache.hadoop.hbase.master.MasterFileSystem.<init>(MasterFileSystem.java:112)
at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:560)
at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:419)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:708)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:511)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:481)
at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:453)
at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:579)
at org.apache.hadoop.ipc.Client$Connection.access$2100(Client.java:202)
at org.apache.hadoop.ipc.Client.getConnection(Client.java:1243)
at org.apache.hadoop.ipc.Client.call(Client.java:1087)
... 17 more
As you can see, it's trying to access localhost/127.0.0.1:9000, which is obviously wrong:
Call to localhost/127.0.0.1:9000 failed on connection exception
This is what my /etc/hosts file looks like:
127.0.0.1 localhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
Also:
Replacing localhost with the public DNS of the instance doesn't work either
A few suggestions first: you don't actually need to put dfs.replication and mapred.job.tracker in core-site.xml, or dfs.support.append in hbase-site.xml. They are not required there.
Please make sure the NameNode is running fine and is out of safe mode. Also, it's better to turn IPv6 off, add hbase.zookeeper.property.dataDir and hbase.zookeeper.property.clientPort to the hbase-site.xml file, and set export HBASE_MANAGES_ZK to true in hbase-env.sh, as sketched below. Restart HBase after changing the config files.
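A minimal sketch of those hbase-site.xml additions, assuming a local single-node ZooKeeper (the dataDir value is a placeholder; 2181 is ZooKeeper's standard client port):

<!-- Sketch: explicit ZooKeeper settings for a pseudo-distributed HBase -->
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/path/to/zookeeper/data</value> <!-- placeholder: pick a writable local directory -->
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>

In hbase-env.sh, export HBASE_MANAGES_ZK=true tells HBase to start and stop its own ZooKeeper instance.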

jboss-as-maven-plugin deploy over https

I have a working remote deploy to JBoss AS 7.1. However, I want to send these deploys using SSL. When I add the server-identities tag with the SSL information, my JBoss instance will not receive the deploy.
<server-identities>
  <ssl>
    <keystore path="xxx/yyy/zzz.jks" password="myFakePassword"/>
  </ssl>
</server-identities>
Removing the above will allow me to deploy remotely, but it will not use ssl (my problem).
The above identity is required for access to the administrative console, so I know that it works.
Here is my configuration of the plugin:
<plugin>
  <groupId>org.jboss.as.plugins</groupId>
  <artifactId>jboss-as-maven-plugin</artifactId>
  <version>7.3.Final</version>
  <configuration>
    <force>true</force>
    <hostname>domain.com</hostname>
    <port>9119</port> <!-- not the real port -->
    <username>myFakeUsername</username>
    <password>myFakePassword</password>
    <filename>deployable.war</filename>
  </configuration>
  <executions>
    <execution>
      <phase>install</phase>
      <goals>
        <goal>deploy</goal>
      </goals>
    </execution>
  </executions>
</plugin>
The error I get from the deploying client is:
[ERROR] }'. java.net.ConnectException: JBAS012174: Could not connect to remote://domain.com:9119. The connection failed: General SSLEngine problem: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
and the error I get on the server log is:
ERROR [org.jboss.remoting.remote.connection] (Remoting "domain.com:MANAGEMENT" read-1) JBREM000200: Remote connection failed: javax.net.ssl.SSLException: Received fatal alert: certificate_unknown
Other relevant information:
The certificate is self-signed. It works for HTTPS requests to the admin console and to the web applications hosted on JBoss, and it works from the above-mentioned identity when accessing the admin console through a browser.
Any help would be GREATLY appreciated.
Thank you in advance.
I had this exact issue and resolved it by creating a truststore, adding my certificate to it, and then passing that truststore as a JVM argument when calling the Maven deploy command.
Create the truststore (the alias is just a name for the certificate entry; myCert here is a placeholder):
keytool -import -file path/to/your/certificate.cert -alias myCert -keystore path/to/myTrustStore
Issue your command with that truststore as a JVM argument:
mvn jboss-as:deploy -Djavax.net.ssl.trustStore=path/to/myTrustStore

Can Maven Wagon plugin use a private key for scp?

Can the Maven Wagon plugin be configured to use a private key for ssh/scp? Everything I've tried still leaves Maven asking me for a password when it gets to the point of scp-ing.
You should be able to specify the path to the private key in the server element in your settings.xml:
The repositories for download and deployment are defined by the repositories and distributionManagement elements of the POM. However, certain settings such as username and password should not be distributed along with the pom.xml. This type of information should exist on the build server in the settings.xml.
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
                              http://maven.apache.org/xsd/settings-1.0.0.xsd">
  ...
  <servers>
    <server>
      <id>server001</id>
      <username>my_login</username>
      <password>my_password</password>
      <privateKey>${user.home}/.ssh/id_dsa</privateKey>
      <passphrase>some_passphrase</passphrase>
      <filePermissions>664</filePermissions>
      <directoryPermissions>775</directoryPermissions>
      <configuration></configuration>
    </server>
  </servers>
  ...
</settings>
id: This is the ID of the server (not of the user to login as) that matches the id element of the repository/mirror that Maven tries to connect to.
username, password: These elements appear as a pair denoting the login and password required to authenticate to this server.
privateKey, passphrase: Like the previous two elements, this pair specifies a path to a private key (the default is ${user.home}/.ssh/id_dsa) and a passphrase, if required. The passphrase and password elements may be externalized in the future, but for now they must be set plain-text in the settings.xml file.
filePermissions, directoryPermissions: When a repository file or directory is created on deployment, these are the permissions to use. The legal value of each is a three-digit number corresponding to *nix file permissions, e.g. 664 or 775.
Note: If you use a private key to login to the server, make sure you omit the <password> element. Otherwise, the key will be ignored.
Password Encryption
A new feature, server password and passphrase encryption, has been added to the 2.1.x and 3.0 trunks. See details on this page.
Pay special attention to the note: if you use a private key to log in to the server, make sure you omit the <password> element; otherwise, the key will be ignored. So the final configuration will be close to:
<settings>
  ...
  <servers>
    <server>
      <id>ssh-repository</id>
      <username>your username in the remote system</username>
      <privateKey>/path/to/your/private/key</privateKey>
      <passphrase>sUp3rStr0ngP4s5wOrD</passphrase> <!-- if required -->
      <configuration>
        ...
      </configuration>
    </server>
  </servers>
  ...
</settings>
I know this is an old thread, but it looks like the Wagon plugin reads settings.xml (e.g. the username) but does not use all of the settings. I could not get it to stop asking for a Kerberos username/password during scp. (It looks like there might have been changes to the plugin in late 2016 that affect this.)
Just adding this answer in case it helps someone else.
For me, the solution was even simpler: skip settings.xml entirely and simply specify scpexe instead of scp as the protocol (e.g. under the distributionManagement section of pom.xml; see the sketch after the plugin example below). This then uses your machine's default SSH configuration (Unix settings under ~/.ssh).
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>wagon-maven-plugin</artifactId>
  <version>1.0</version>
  <executions>
    <execution>
      <id>upload-to-server</id>
      <phase>deploy</phase>
      <goals><goal>upload-single</goal></goals>
      <configuration>
        <fromFile>file-to-upload</fromFile>
        <url>scpexe://username@serverName/dirname-to-copy-to</url>
        <toFile>file-to-upload</toFile>
      </configuration>
    </execution>
  </executions>
</plugin>
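For the distributionManagement variant mentioned above, the POM might look like the following sketch (the server name and path are placeholders, and the version shown is an assumption; the scpexe:// protocol is provided by the wagon-ssh-external extension):

<!-- Sketch: deploy over scpexe://, which shells out to the machine's own ssh/scp -->
<distributionManagement>
  <repository>
    <id>ssh-repository</id>
    <url>scpexe://serverName/path/to/remote/repo</url>
  </repository>
</distributionManagement>

<build>
  <extensions>
    <!-- provides the scpexe:// wagon -->
    <extension>
      <groupId>org.apache.maven.wagon</groupId>
      <artifactId>wagon-ssh-external</artifactId>
      <version>3.4.3</version> <!-- assumed: use a current wagon-ssh-external release -->
    </extension>
  </extensions>
</build>

Because scpexe shells out to the system SSH client, key selection and host aliases come from ~/.ssh/config rather than from settings.xml.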
I wanted to do the exact same thing today in conjunction with the maven-site-plugin (3.9.1) and was also hitting some roadblocks (specifically, the wagon-ssh plugin insisted on asking me for my Kerberos username and password).
What finally worked for me with wagon-ssh-3.4.3:
<!-- add scp support for mvn site:deploy -->
<dependency>
  <groupId>org.apache.maven.wagon</groupId>
  <artifactId>wagon-ssh</artifactId>
  <version>3.4.3</version>
</dependency>
and in settings.xml:
<server>
  <id>ssh-repository</id>
  <username>pridkdev</username>
  <privateKey>${user.home}/.ssh/pridkdev.ppk</privateKey>
  <filePermissions>664</filePermissions>
  <directoryPermissions>775</directoryPermissions>
  <configuration>
    <interactive>false</interactive>
    <strictHostKeyChecking>no</strictHostKeyChecking>
    <preferredAuthentications>publickey</preferredAuthentications>
  </configuration>
</server>
I guess what was crucial is the <configuration> block, and there especially the <preferredAuthentications> setting.
I found the necessary info here:
http://maven.apache.org/plugins/maven-deploy-plugin/examples/deploy-ssh-external.html