Can't connect to S3 in PrestoDB: Unable to load credentials from service endpoint - amazon-s3

I am connecting S3 buckets to Apache Hive so that I can query the Parquet files in S3 directly through PrestoDB. I am using the HDP VM for PrestoDB by Teradata.
For this, I added my AWS Access Key and Secret Key to the /etc/hive/conf/hive-site.xml file like so:
<property>
  <name>hive.s3.aws-access-key</name>
  <value>something</value>
</property>
<property>
  <name>hive.s3.aws-secret-key</name>
  <value>some-other-thing</value>
</property>
Now, the S3 bucket URL where the Parquet files reside looks like this in the AWS console:
https://s3.console.aws.amazon.com/s3/buckets/sb.mycompany.com/someFolder/anotherFolder/?region=us-east-2&tab=overview
While creating an external table, I gave the S3 location in the query as:
CREATE TABLE hive.project.data (... schema ...)
WITH ( format = 'PARQUET',
external_location = 's3://sb.mycompany.com/someFolder/anotherFolder/?region=us-east-2&tab=overview')
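Note: the external_location value is normally just the s3:// URI of the bucket and key prefix; the ?region=us-east-2&tab=overview part comes from the console URL and is not part of the object path. A hedged sketch of the intended form, using the same bucket and folders:
CREATE TABLE hive.project.data (... schema ...)
WITH ( format = 'PARQUET',
external_location = 's3://sb.mycompany.com/someFolder/anotherFolder/')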
Apache Hive isn't able to connect to the S3 bucket, and with the --debug flag the query fails with this error:
Query 20180316_112407_00005_aj9x6 failed: Unable to load credentials from service endpoint
========= TECHNICAL DETAILS =========
[ Error message ]
Unable to load credentials from service endpoint
[ Session information ]
ClientSession{server=http://localhost:8080, user=presto, clientInfo=null, catalog=null, schema=null, timeZone=Zulu, locale=en_US, properties={}, transactionId=null, debug=true, quiet=false}
[ Stack trace ]
com.amazonaws.AmazonClientException: Unable to load credentials from service endpoint
at com.amazonaws.auth.EC2CredentialsFetcher.handleError(EC2CredentialsFetcher.java:180)
at com.amazonaws.auth.EC2CredentialsFetcher.fetchCredentials(EC2CredentialsFetcher.java:159)
at com.amazonaws.auth.EC2CredentialsFetcher.getCredentials(EC2CredentialsFetcher.java:82)
at com.amazonaws.auth.InstanceProfileCredentialsProvider.getCredentials(InstanceProfileCredentialsProvider.java:104)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4016)
at com.amazonaws.services.s3.AmazonS3Client.getBucketRegionViaHeadRequest(AmazonS3Client.java:4478)
at com.amazonaws.services.s3.AmazonS3Client.fetchRegionFromCache(AmazonS3Client.java:4452)
at com.amazonaws.services.s3.AmazonS3Client.resolveServiceEndpoint(AmazonS3Client.java:4426)
at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1167)
at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1152)
at com.facebook.presto.hive.PrestoS3FileSystem.lambda$getS3ObjectMetadata$2(PrestoS3FileSystem.java:552)
at com.facebook.presto.hive.RetryDriver.run(RetryDriver.java:138)
at com.facebook.presto.hive.PrestoS3FileSystem.getS3ObjectMetadata(PrestoS3FileSystem.java:549)
at com.facebook.presto.hive.PrestoS3FileSystem.getFileStatus(PrestoS3FileSystem.java:305)
at org.apache.hadoop.fs.FileSystem.isDirectory(FileSystem.java:1439)
at com.facebook.presto.hive.HiveMetadata.getExternalPath(HiveMetadata.java:719)
at com.facebook.presto.hive.HiveMetadata.createTable(HiveMetadata.java:690)
at com.facebook.presto.spi.connector.classloader.ClassLoaderSafeConnectorMetadata.createTable(ClassLoaderSafeConnectorMetadata.java:218)
at com.facebook.presto.metadata.MetadataManager.createTable(MetadataManager.java:505)
at com.facebook.presto.execution.CreateTableTask.execute(CreateTableTask.java:148)
at com.facebook.presto.execution.CreateTableTask.execute(CreateTableTask.java:57)
at com.facebook.presto.execution.DataDefinitionExecution.start(DataDefinitionExecution.java:111)
at com.facebook.presto.execution.QueuedExecution.lambda$start$1(QueuedExecution.java:63)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.net.ConnectException: Network is unreachable
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at sun.net.NetworkClient.doConnect(NetworkClient.java:175)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:432)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:527)
at sun.net.www.http.HttpClient.<init>(HttpClient.java:211)
at sun.net.www.http.HttpClient.New(HttpClient.java:308)
at sun.net.www.http.HttpClient.New(HttpClient.java:326)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:1169)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1105)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:999)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:933)
at com.amazonaws.internal.ConnectionUtils.connectToEndpoint(ConnectionUtils.java:47)
at com.amazonaws.internal.EC2CredentialsUtils.readResource(EC2CredentialsUtils.java:106)
at com.amazonaws.internal.EC2CredentialsUtils.readResource(EC2CredentialsUtils.java:77)
at com.amazonaws.auth.InstanceProfileCredentialsProvider$InstanceMetadataCredentialsEndpointProvider.getCredentialsEndpoint(InstanceProfileCredentialsProvider.java:117)
at com.amazonaws.auth.EC2CredentialsFetcher.fetchCredentials(EC2CredentialsFetcher.java:121)
... 24 more
========= TECHNICAL DETAILS END =========
I even restarted my PrestoDB server after adding the keys. Next, I tried adding the properties to /home/presto/.prestoadmin/catalog/hive.properties:
connector.name=hive-hadoop2
hive.metastore.uri=thrift://localhost:9083
hive.allow-drop-table=true
hive.allow-rename-table=true
hive.time-zone=UTC
hive.metastore-cache-ttl=0s
hive.s3.use-instance-credentials=false
hive.s3.aws-access-key=something
hive.s3.aws-secret-key=some-other-thing
I restarted the PrestoDB server again, but the same issue remains.
I then modified the S3 bucket location in the query to use the bucket name only:
external_location = 's3://sb.mycompany.com'
And with the s3a scheme as well:
external_location = 's3a://sb.mycompany.com'
But the same issue is still present. What am I doing wrong?

This is embarrassing. On the VM I was using, there were issues with the network adapter, so the VM wasn't able to connect to the Internet. I corrected the adapter and it is working now.
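If anyone hits the same "Network is unreachable" / "Unable to load credentials from service endpoint" symptom, a quick hedged sanity check from the VM is to confirm outbound connectivity to the regional S3 endpoint before touching the Presto configuration (us-east-2 is assumed here from the bucket URL above):
# From the Presto/Hive VM: should print an HTTP status line if outbound HTTPS works
curl -sI https://s3.us-east-2.amazonaws.com | head -n 1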

Related

NiFi ListSFTP connector cannot connect to sftp server

I am having problems getting the ListSFTP processor to work.
First I tested accessing the SFTP server using the command line on the NiFi server. This works, but I have to provide the passphrase of the private SSH key file.
Then I made sure that I configured the ListSFTP processor using the same settings as the sftp command line:
Hostname: IP address of the server
Port: 22
Username: username with access to the server and bound to the key
Private Key Path: full path of the file containing the private key
Private Key Passphrase: password for the key file
Remote Path: .
Strict Host Key Checking: false
Host Key File: file that contains the output of ssh-keyscan for the given server
With these settings I get the error below in the log file. Any idea why NiFi cannot access the key file and connect to the SFTP server? Cheers
2020-03-19 15:43:19,736 ERROR [Timer-Driven Process Thread-2] o.a.nifi.processors.standard.ListSFTP ListSFTP[id=d343fa19-0170-1000-3352-999829a448cc] Failed to perform listing on remote host due to java.io.IOException: Failed to obtain connection to remote host due to com.jcraft.jsch.JSchException: invalid privatekey: [B@6d9b662: {}
java.io.IOException: Failed to obtain connection to remote host due to com.jcraft.jsch.JSchException: invalid privatekey: [B@6d9b662
at org.apache.nifi.processors.standard.util.SFTPTransfer.getChannel(SFTPTransfer.java:515)
at org.apache.nifi.processors.standard.util.SFTPTransfer.getListing(SFTPTransfer.java:212)
at org.apache.nifi.processors.standard.util.SFTPTransfer.getListing(SFTPTransfer.java:175)
at org.apache.nifi.processors.standard.ListFileTransfer.performListing(ListFileTransfer.java:106)
at org.apache.nifi.processors.standard.ListSFTP.performListing(ListSFTP.java:146)
at org.apache.nifi.processor.util.list.AbstractListProcessor.listByTrackingTimestamps(AbstractListProcessor.java:472)
at org.apache.nifi.processor.util.list.AbstractListProcessor.onTrigger(AbstractListProcessor.java:414)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1162)
at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:209)
at org.apache.nifi.controller.scheduling.TimerDrivenSchedulingAgent$1.run(TimerDrivenSchedulingAgent.java:117)
at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748) Caused by: com.jcraft.jsch.JSchException: invalid privatekey: [B#6d9b662
at com.jcraft.jsch.KeyPair.load(KeyPair.java:664)
at com.jcraft.jsch.KeyPair.load(KeyPair.java:561)
at com.jcraft.jsch.IdentityFile.newInstance(IdentityFile.java:40)
at com.jcraft.jsch.JSch.addIdentity(JSch.java:407)
at com.jcraft.jsch.JSch.addIdentity(JSch.java:388)
at org.apache.nifi.processors.standard.util.SFTPTransfer.getChannel(SFTPTransfer.java:485)
... 18 common frames omitted
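A hedged note on the quoted error: com.jcraft.jsch.JSchException: invalid privatekey is commonly seen when the key file is in the newer OpenSSH private-key format (a header of -----BEGIN OPENSSH PRIVATE KEY-----), which the JSch versions bundled with older NiFi releases cannot parse. If that applies here, converting the key to the classic PEM format usually lets JSch load it, for example:
# Rewrites the key in PEM format in place; prompts for the existing passphrase
ssh-keygen -p -f /path/to/private_key -m PEM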

kafka connect to s3 fails to start

I'm trying to configure Kafka Connect to send my data from Kafka to S3.
I'm a newbie with Kafka, and I'm trying to implement this flow without any SSL encryption just to get the hang of it.
Kafka version: 2.12-2.2.0
kafka-connect: 4.1.1 (https://api.hub.confluent.io/api/plugins/confluentinc/kafka-connect-s3/versions/4.1.1/archive)
In the server.properties file, the only change I made was setting advertised.listeners to my EC2 IP:
advertised.listeners=PLAINTEXT://ip:9092
kafka-connect properties:
# Kafka broker IP addresses to connect to
bootstrap.servers=localhost:9092
# Path to directory containing the connector jar and dependencies
plugin.path=/root/kafka_2.12-2.2.0/plugins/
# Converters to use to convert keys and values
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
# The internal converters Kafka Connect uses for storing offset and configuration data
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
offset.storage.file.filename=/tmp/connect.offsets
security.protocol=SASL_PLAINTEXT
consumer.security.protocol=SASL_PLAINTEXT
My s3-sink.properties file:
name=s3.sink
connector.class=io.confluent.connect.s3.S3SinkConnector
tasks.max=1
topics=my_topic
s3.region=us-east-1
s3.bucket.name=my_bucket
s3.part.size=5242880
flush.size=3
storage.class=io.confluent.connect.s3.storage.S3Storage
format.class=io.confluent.connect.s3.format.json.JsonFormat
schema.generator.class=io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator
partitioner.class=io.confluent.connect.storage.partitioner.DefaultPartitioner
schema.compatibility=NONE
I'm launching kafka-connect with the following command:
connect-standalone.sh kafka-connect.properties s3-sink.properties
At first I got the following error:
Caused by: java.lang.IllegalArgumentException: Could not find a 'KafkaClient' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
From other posts I saw that I need to create a JAAS config file, so this is what I did:
cat config/kafka_server_jass.conf
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="userName"
serviceName="kafka"
password="password";
};
and:
export KAFKA_OPTS="-Djava.security.auth.login.config=/root/kafka_2.12-2.2.0/config/kafka_server_jass.conf"
Now I'm getting the following error:
Caused by: org.apache.kafka.common.errors.SaslAuthenticationException: Failed to configure SaslClientAuthenticator
Caused by: org.apache.kafka.common.KafkaException: Principal could not be determined from Subject, this may be a transient failure due to Kerberos re-login
help :)
You might also need to define the principal and keytab inside your JAAS configuration:
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="userName"
serviceName="kafka"
password="password"
useKeyTab=true
keyTab="/etc/security/keytabs/kafka_server.keytab"
principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
};
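A hedged follow-up: useKeyTab, keyTab and principal are options of the Kerberos login module (Krb5LoginModule), while PlainLoginModule itself only understands username/password. If the broker is actually configured for SASL/PLAIN rather than Kerberos, an alternative sketch is to drop the separate JAAS file and put the JAAS line directly into the Connect worker properties (the user and password below are placeholders):
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="userName" password="password";
# Kafka Connect's internal consumers need the same settings under the consumer. prefix
consumer.security.protocol=SASL_PLAINTEXT
consumer.sasl.mechanism=PLAIN
consumer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="userName" password="password";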

Apache Drill Impersonation

I'm trying to build security into our Drill (1.6.0) system. I managed to get user authentication to work (JPam, as explained in the documentation), but impersonation does not seem to work. Queries seem to execute and fetch via the admin user regardless of who has logged in via ODBC.
My drill-override.conf file is configured as follows:
drill.exec: {
  cluster-id: "drillbits1",
  zk.connect: "localhost:2181",
  impersonation: {
    enabled: true,
    max_chained_user_hops: 3
  },
  security.user.auth {
    enabled: true,
    packages += "org.apache.drill.exec.rpc.user.security",
    impl: "pam",
    pam_profiles: [ "sudo", "login" ]
  }
}
We are also only using Drill on one server, so I'm running drill-embedded to start things up. Troubleshooting:
root@srv001:/opt/apache-drill-1.6.0# bin/sqlline -u "jdbc:drill:schema=dfs;zk=localhost:2181;impersonation_target=dUser001" -n entryUser -p entryUserPassword
Error: Failure in connecting to Drill: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for client. (state=,code=0)
java.sql.SQLException: Failure in connecting to Drill: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for client.
at org.apache.drill.jdbc.impl.DrillConnectionImpl.<init>(DrillConnectionImpl.java:159)
at org.apache.drill.jdbc.impl.DrillJdbc41Factory.newDrillConnection(DrillJdbc41Factory.java:64)
at org.apache.drill.jdbc.impl.DrillFactory.newConnection(DrillFactory.java:69)
at net.hydromatic.avatica.UnregisteredDriver.connect(UnregisteredDriver.java:126)
at org.apache.drill.jdbc.Driver.connect(Driver.java:72)
at sqlline.DatabaseConnection.connect(DatabaseConnection.java:167)
at sqlline.DatabaseConnection.getConnection(DatabaseConnection.java:213)
at sqlline.Commands.connect(Commands.java:1083)
at sqlline.Commands.connect(Commands.java:1015)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at sqlline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:36)
at sqlline.SqlLine.dispatch(SqlLine.java:742)
at sqlline.SqlLine.initArgs(SqlLine.java:528)
at sqlline.SqlLine.begin(SqlLine.java:596)
at sqlline.SqlLine.start(SqlLine.java:375)
at sqlline.SqlLine.main(SqlLine.java:268)
Caused by: org.apache.drill.exec.rpc.RpcException: Failure setting up ZK for client.
at org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:200)
at org.apache.drill.jdbc.impl.DrillConnectionImpl.<init>(DrillConnectionImpl.java:151)
... 18 more
Caused by: java.io.IOException: Failure to connect to the zookeeper cluster service within the allotted time of 10000 milliseconds.
at org.apache.drill.exec.coord.zk.ZKClusterCoordinator.start(ZKClusterCoordinator.java:123)
at org.apache.drill.exec.client.DrillClient.connect(DrillClient.java:198)
... 19 more
Any ideas on this?
I have also looked at doing my own built-in security, but I'm not able to retrieve the username from a SQL query. I have tried the following without any luck:
CURRENT_USER()
USER()
SESSION_USER()
Any ideas on this approach?
I suggest creating a different PAM profile (say drill) rather than using login and sudo.
Then create a drill file under the /etc/pam.d/ directory with this content:
#%PAM-1.0
auth include password-auth
account include password-auth
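With that profile in place, the drill-override.conf fragment would point at it instead of sudo/login; a hedged sketch:
security.user.auth {
  enabled: true,
  packages += "org.apache.drill.exec.rpc.user.security",
  impl: "pam",
  pam_profiles: [ "drill" ]
}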
To get connections run:
select * from sys.connections;

How to connect spark application to secure HBase with Kerberos

I'm trying to connect a Spark application to HBase with Kerberos enabled. The Spark version is 1.5.0, CDH 5.5.2, and it's executed in yarn-cluster mode.
When HbaseContext is initialized, it throws this error:
ERROR ipc.AbstractRpcClient: SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
I have tried to do the authentication in the code, adding:
UserGroupInformation.setConfiguration(config)
UserGroupInformation.loginUserFromKeytab(principalName, keytabFilename)
I distribute the keytab file with the --files option in spark-submit.
Now the error is:
java.io.IOException: Login failure for usercomp@COMPANY.CORP from keytab krb5.usercomp.keytab: javax.security.auth.login.LoginException: Unable to obtain password from user
...
Caused by: javax.security.auth.login.LoginException: Unable to obtain password from user
at com.sun.security.auth.module.Krb5LoginModule.promptForPass(Krb5LoginModule.java:856)
Is this the way to connect to Kerberized HBase from a Spark app?
Please see the example configuration below in case you are missing anything, like hadoop.security.authentication:
val conf = HBaseConfiguration.create()
conf.set("hbase.zookeeper.quorum", "list of ip's")
conf.set("hbase.zookeeper.property.clientPort", "2181")
conf.set("hbase.master", "masterIP:60000")
conf.set("hadoop.security.authentication", "kerberos")
Actually try to put your hbase-site.xml directly in the SPARK_CONF directory of your edge node (should be something like /etc/spark/conf or /etc/spark2/conf).
You can use loginUserFromKeytabAndReturnUGI and ugi.doAs,
or you could add your HBase classpath to SPARK_DIST_CLASSPATH.
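For the loginUserFromKeytabAndReturnUGI suggestion above, a minimal hedged sketch (the principal and keytab names are taken from the error message above and may differ in practice):
import java.security.PrivilegedExceptionAction
import org.apache.hadoop.security.UserGroupInformation

// Explicit login from the keytab shipped with --files; returns a UGI
// instead of mutating the static login user
val ugi = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
  "usercomp@COMPANY.CORP", "krb5.usercomp.keytab")

// Any HBase access performed inside doAs runs with these Kerberos credentials
ugi.doAs(new PrivilegedExceptionAction[Unit] {
  override def run(): Unit = {
    // ... build the HBase connection / HbaseContext and issue the reads here ...
  }
})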

Step ‘Run Tests on AWS Device Farm’ aborted due to exception: javax.net.ssl.SSLHandshakeException: Could not generate secret

I am trying to integrate Jenkins with AWS Device Farm to automate mobile device testing.
So, I created an IAM user and attached the Device Farm policy to the user.
I generated an AWS Access Key ID & AWS Secret Key and provided them in the Jenkins manage configuration.
But the post-build action 'Run Tests on AWS Device Farm' leads to the exception below. Could anyone help me?
The exception stack trace is as follows:
[AWSDeviceFarm] Using Project 'BMS_OPDVO'
[AWSDeviceFarm] Using DevicePool 'LG Nexus5'
[AWSDeviceFarm] Using App '**/target/resources/org.wordpress.android.5.0.apk'
[AWSDeviceFarm] Archiving artifact 'org.wordpress.android.5.0.apk'
[AWSDeviceFarm] Uploading org.wordpress.android.5.0.apk to S3
ERROR: Step ‘Run Tests on AWS Device Farm’ aborted due to exception:
javax.net.ssl.SSLHandshakeException: Could not generate secret
at sun.security.ssl.ECDHCrypt.getAgreedSecret(ECDHCrypt.java:103)
at sun.security.ssl.ClientHandshaker.serverHelloDone(ClientHandshaker.java:1067)
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:348)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:979)
at sun.security.ssl.Handshaker.process_record(Handshaker.java:914)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1062)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1375)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1403)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1387)
at org.apache.http.conn.ssl.SSLConnectionSocketFactory.createLayeredSocket(SSLConnectionSocketFactory.java:290)
at org.apache.http.conn.ssl.SSLConnectionSocketFactory.connectSocket(SSLConnectionSocketFactory.java:259)
at org.apache.http.impl.conn.HttpClientConnectionOperator.connect(HttpClientConnectionOperator.java:125)
at org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:319)
at org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:363)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:219)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:195)
at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:86)
at org.apache.http.impl.execchain.RedirectExec.execute(RedirectExec.java:108)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:184)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
at org.jenkinsci.plugins.awsdevicefarm.AWSDeviceFarm.upload(AWSDeviceFarm.java:359)
at org.jenkinsci.plugins.awsdevicefarm.AWSDeviceFarm.upload(AWSDeviceFarm.java:330)
at org.jenkinsci.plugins.awsdevicefarm.AWSDeviceFarm.upload(AWSDeviceFarm.java:317)
at org.jenkinsci.plugins.awsdevicefarm.AWSDeviceFarm.uploadApp(AWSDeviceFarm.java:211)
at org.jenkinsci.plugins.awsdevicefarm.AWSDeviceFarmRecorder.perform(AWSDeviceFarmRecorder.java:287)
at hudson.tasks.BuildStepMonitor$3.perform(BuildStepMonitor.java:45)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:785)
at hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:726)
at hudson.maven.MavenModuleSetBuild$MavenModuleSetBuildExecution.post2(MavenModuleSetBuild.java:1037)
at hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:671)
at hudson.model.Run.execute(Run.java:1766)
at hudson.maven.MavenModuleSetBuild.run(MavenModuleSetBuild.java:529)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:408)
Caused by: java.security.InvalidKeyException: ECDH key agreement requires ECPublicKey for doPhase
at org.bouncycastle.jcajce.provider.asymmetric.ec.KeyAgreementSpi.engineDoPhase(Unknown Source)
at javax.crypto.KeyAgreement.doPhase(KeyAgreement.java:567)
at sun.security.ssl.ECDHCrypt.getAgreedSecret(ECDHCrypt.java:100)
... 34 more
The exception is basically because the SSL handshake between Jenkins and the S3 server is not happening.
I installed the Skip Certificate Check plugin and restarted the Jenkins server, which ignored the SSL handshake error and let the operation proceed.
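If it helps, a hedged example of installing that plugin from the Jenkins CLI (assuming the plugin ID is skip-certificate-check; it can also be installed from Manage Jenkins > Manage Plugins):
# Install the plugin and restart Jenkins when done
java -jar jenkins-cli.jar -s http://localhost:8080/ install-plugin skip-certificate-check -restart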