Codename One push notification fails with "javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure" - ssl

We have a web project which has always worked fine. It simply uses the Codename One push API to send notifications to our devices, but it suddenly started failing with the following error:
javax.net.ssl.SSLHandshakeException: Received fatal alert:
handshake_failure
Below is the core code (the same as the Codename One demo):
HttpURLConnection connection = (HttpURLConnection) new URL("https://push.codenameone.com/push/push").openConnection();
connection.setDoOutput(true);
connection.setRequestMethod("POST");
connection.setRequestProperty("Content-Type", "application/x-www-form-urlencoded;charset=UTF-8");
String cert = ITUNES_DEVELOPMENT_PUSH_CERT;
String pass = ITUNES_DEVELOPMENT_PUSH_CERT_PASSWORD;
if (ITUNES_PRODUCTION_PUSH) {
    cert = ITUNES_PRODUCTION_PUSH_CERT;
    pass = ITUNES_PRODUCTION_PUSH_CERT_PASSWORD;
}
String query = "token=" + PUSH_TOKEN +
        "&device=" + URLEncoder.encode(deviceId1, "UTF-8") +
        "&device=" + URLEncoder.encode(deviceId2, "UTF-8") +
        "&device=" + URLEncoder.encode(deviceId3, "UTF-8") +
        "&type=1" +
        "&auth=" + URLEncoder.encode(FCM_SERVER_API_KEY, "UTF-8") +
        "&certPassword=" + URLEncoder.encode(pass, "UTF-8") +
        "&cert=" + URLEncoder.encode(cert, "UTF-8") +
        "&body=" + URLEncoder.encode(MESSAGE_BODY, "UTF-8") +
        "&production=" + ITUNES_PRODUCTION_PUSH +
        "&sid=" + URLEncoder.encode(WNS_SID, "UTF-8") +
        "&client_secret=" + URLEncoder.encode(WNS_CLIENT_SECRET, "UTF-8");
try (OutputStream output = connection.getOutputStream()) {
    output.write(query.getBytes("UTF-8"));
}
int c = connection.getResponseCode();
// read response JSON
When I run the code directly in a unit test, it works well.
But when I call the function from the project (for example, from a button on a webpage), the error occurs.
I have tried several ways to solve it but none has worked; please give me some suggestions for fixing the issue. Thank you!

This generally happens if the certificate is invalid or out of date, etc. It can also happen if something along the connection path interferes with the TLS handshake. I just verified our SSL certificate on the push servers and it's valid (generated by Cloudflare), so I suggest checking the routes to the server and your version of Java. You should have Java 8 or newer with a recent enough minor update version.
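If the web container runs on a different JVM than your unit tests, the two environments may be offering different TLS versions during the handshake. A quick way to compare them is to run a check like this (a minimal sketch using only standard JDK classes) in both environments:

import java.util.Arrays;
import javax.net.ssl.SSLContext;

public class TlsCheck {
    public static void main(String[] args) throws Exception {
        SSLContext context = SSLContext.getDefault();
        // Protocol versions the JVM offers by default for client connections
        System.out.println("Default:   " + Arrays.toString(context.getDefaultSSLParameters().getProtocols()));
        // Protocol versions the JVM could offer if explicitly enabled
        System.out.println("Supported: " + Arrays.toString(context.getSupportedSSLParameters().getProtocols()));
    }
}

Note that on Java 7, HttpsURLConnection defaults to TLS 1.0 even though TLS 1.2 is supported; starting the JVM with -Dhttps.protocols=TLSv1.2 (or upgrading to Java 8) is the usual fix when a server has dropped support for older TLS versions.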

Related

Jira API - how to update fields on a Jira issue that is closed/published

The following code works fine on a Jira issue that is in the Open state, but when it is tried on a closed/published issue I get an error. I wanted to see if this is even possible. Manually, on a closed/published Jira issue, we can update those fields.
Client client = Client.create();
WebResource webResource = client.resource("https://jira.com/rest/api/latest/issue/JIRA_KEY1");
String data1 = "{\r\n" +
        "  \"fields\" : {\r\n" +
        "    \"customfield_10201\" : \"Value 1\"\r\n" +
        "  }\r\n" +
        "}";
String auth = new String(Base64.encode("user" + ":" + "pass"));
ClientResponse response = webResource.header("Authorization", "Basic " + auth)
        .type("application/json")
        .accept("application/json")
        .put(ClientResponse.class, data1);
Error received:
Http Error : 400{"errorMessages":[],"errors":{"customfield_10201":"Field 'customfield_10201' cannot be set. It is not on the appropriate screen, or unknown."}}
There is probably a restriction that doesn't allow the value of that field to be changed once the issue is marked as complete; the 400 response says the field is not on the appropriate screen for the issue's current state.
Try opening that completed issue via the web interface and changing the field's value; if you can't do it via the web interface, then you can't do it via the REST API either.
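One way to confirm this from the API side is Jira's editmeta resource, which lists the fields that are editable for the issue in its current status. A sketch using the same Jersey client as above:

Client client = Client.create();
WebResource editMeta = client.resource("https://jira.com/rest/api/latest/issue/JIRA_KEY1/editmeta");
// If customfield_10201 does not appear in this response, the PUT above
// will keep failing until the field is added to the edit screen for that status.
ClientResponse response = editMeta.header("Authorization", "Basic " + auth)
        .accept("application/json")
        .get(ClientResponse.class);
System.out.println(response.getEntity(String.class));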

PLC4X: Exception during scraping of Job

I'm currently developing a project that reads data from 19 Siemens S7-1500 PLCs and 1 Modicon. I have used the scraper tool following this tutorial:
PLC4x scraper tutorial
but when the scraper has been working for a short amount of time I get the following exception:
I have changed the scheduled time between 1 and 100 and I always get the same exception when the scraper reaches the same number of received messages.
I have tested whether using PlcDriverManager instead of PooledPlcDriverManager could be a solution, but the same problem persists.
In my pom.xml I use the following dependency:
<dependency>
    <groupId>org.apache.plc4x</groupId>
    <artifactId>plc4j-scraper</artifactId>
    <version>0.7.0</version>
</dependency>
I have tried changing the version to an older one like 0.6.0 or 0.5.0, but the problem still persists.
If I use the Modicon (Modbus TCP) I also get this exception after a short amount of time.
Does anyone know why this error is happening? Thanks in advance.
Edit: With the scraper version 0.8.0-SNAPSHOT I continue having this problem.
Edit2: This is my code. I think the problem may be that my scraper is opening a lot of connections and failing when it reaches 65526 messages. But since all the processing happens inside the lambda function and I'm using a PooledPlcDriverManager, I think the scraper is using only one connection, so I don't know where the mistake is.
try {
    // Create a new PooledPlcDriverManager
    PlcDriverManager S7_plcDriverManager = new PooledPlcDriverManager();
    // Trigger Collector
    TriggerCollector S7_triggerCollector = new TriggerCollectorImpl(S7_plcDriverManager);
    // Messages counter
    AtomicInteger messagesCounter = new AtomicInteger();
    // Configure the scraper, by binding a Scraper Configuration, a ResultHandler and a TriggerCollector together
    TriggeredScraperImpl S7_scraper = new TriggeredScraperImpl(S7_scraperConfig, (jobName, sourceName, results) -> {
        LinkedList<Object> S7_results = new LinkedList<>();
        messagesCounter.getAndIncrement();
        S7_results.add(jobName);
        S7_results.add(sourceName);
        S7_results.add(results);
        logger.info("Array: " + String.valueOf(S7_results));
        logger.info("MESSAGE number: " + messagesCounter);
        // Producer topics routing
        String topic = "s7" + S7_results.get(1).toString().substring(
                S7_results.get(1).toString().indexOf("S7_SourcePLC") + 9,
                S7_results.get(1).toString().length());
        String key = parseKey_S7("s7");
        String value = parseValue_S7(S7_results.getLast().toString(), S7_results.get(1).toString());
        logger.info("------- PARSED VALUE -------------------------------- " + value);
        // Create my own Kafka Producer record
        ProducerRecord<String, String> record = new ProducerRecord<String, String>(topic, key, value);
        // Send Data to Kafka - asynchronous
        producer.send(record, new Callback() {
            public void onCompletion(RecordMetadata recordMetadata, Exception e) {
                // executes every time a record is successfully sent or an exception is thrown
                if (e == null) {
                    // the record was successfully sent
                    logger.info("Received new metadata. \n" +
                            "Topic: " + recordMetadata.topic() + "\n" +
                            "Partition: " + recordMetadata.partition() + "\n" +
                            "Offset: " + recordMetadata.offset() + "\n" +
                            "Timestamp: " + recordMetadata.timestamp());
                } else {
                    logger.error("Error while producing", e);
                }
            }
        });
    }, S7_triggerCollector);
    S7_scraper.start();
    S7_triggerCollector.start();
} catch (ScraperException e) {
    logger.error("Error starting the scraper (S7_scrapper)", e);
}
So in the end it was indeed the PLC that was simply hanging up the connection randomly. However, the NiFi integration should have handled this situation more gracefully. I implemented a fix for this particular error ... could you please give version 0.8.0-SNAPSHOT a try (or use 0.8.0 if we happen to have released it already)?
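To try the snapshot build, the dependency plus (assuming the standard Apache setup) the snapshot repository would look roughly like this in the pom.xml:

<repositories>
    <repository>
        <id>apache-snapshots</id>
        <url>https://repository.apache.org/content/repositories/snapshots</url>
        <snapshots>
            <enabled>true</enabled>
        </snapshots>
    </repository>
</repositories>

<dependency>
    <groupId>org.apache.plc4x</groupId>
    <artifactId>plc4j-scraper</artifactId>
    <version>0.8.0-SNAPSHOT</version>
</dependency>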

Incoming Express parameters are not the same as what's passed in

I have a strange issue where I pass parameters from a URL into my Express server.
When I read the req.params.code and req.params.mode variables, they are different from what was passed in through the URL.
Allow me to show you...
Here is the Express code:
router.get('/verify/:user/:mode/:code', function(req, res){
    console.log("STARTING VERIFICATION");
    var code = req.params.code;
    console.log('code: ' + code);
    var user = req.params.user;
    console.log('user: ' + user);
    var mode = req.params.mode;
    console.log('mode: ' + mode);
    console.log('req.params: ' + JSON.stringify(req.params));
    var regex = new RegExp(["^", req.params.user, "$"].join(""), "i");
    console.log('REGEX: ' + regex);
    var verified = false;
    console.log('req.params: ' + req.params);
    console.log('req.body: ' + req.body);
    console.log("rx: " + regex);
    console.log('req.params.code: ' + req.params.code);
    console.log('req.params.user: ' + req.params.user);
etc... etc... etc...
Here is the output in the console:
STARTING VERIFICATION
code: background-cycler.js
user: admin
mode: js
req.params: {"user":"admin","mode":"js","code":"background-cycler.js"}
REGEX: /^admin$/i
req.params: [object Object]
req.body: [object Object]
rx: /^admin$/i
req.params.code: background-cycler.js
req.params.user: admin
Here is the URL that is passed into the browser:
https://examplesite.com/verify/admin/sms/9484
I want to say that this code worked prior to dusting it off and moving an instance to Google's cloud compute...
As you can see from the URL, the parameters passed to the verify route should be code = 9484 and mode = sms; instead I'm getting an unintended JS filename for code and js for mode.
UPDATE: As requested I added this within the Express route function:
console.log(req.originalUrl);
and I get this result:
/verify/admin/js/background-cycler.js
I can verify the URL that sent this was:
https://examplesite.com/verify/admin/sms/9484
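One way to narrow this down is to log every request the server actually receives, which would show whether the browser is issuing a second request (for example, a script tag on the returned page resolving to a relative path under /verify/admin/) that also happens to match the route. A sketch, mounted before the route definitions:

// Hypothetical diagnostic middleware - logs every request hitting this router.
router.use(function(req, res, next) {
    console.log(req.method + ' ' + req.originalUrl);
    next();
});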

PhantomJS Version 1.9.1 - Issues with Proxy Authentication

Can someone please help me out on this?
I have spent a considerable amount of time setting up PhantomJS to save JPGs of specific web pages, and it worked really well until I went to deploy it on a machine which accesses the net through a proxy.
Now, whatever I try, I cannot get the authentication right.
Has anyone EVER managed to do this?
I am using command line arguments:
--proxy=xx.xx.xx.xx:8080
--proxy-type=http
--proxyAuth=myusername:mypassword
I have checked on the proxy (TMG), which still insists that my username is anonymous rather than the one I am sending through on the command line.
From the --debug output, I can see that proxy, proxyType and proxyAuth have all been populated correctly, so PhantomJS is understanding the command line; yet when it runs, it still returns 'Proxy requires authentication'.
Where am I going wrong?
Thanks for reading this and, hopefully, helping me out
BTW - I am using Windows 7 - 64 bit
OK, so I've done a whole load of digging on this and have got it working. So I thought I would publish what I found in case it might help someone else.
One of the things I found while searching is a discussion about including the following in the headers submitted by the JS that drives PhantomJS:
page.customHeaders={'Authorization': 'Basic '+btoa('username:password')};
rather than using
page.settings.userName = 'username';
page.settings.password = 'password';
which will not work. Please refer to Previous Discussion
This is fine if you are using basic levels of authentication on the proxy. It will not work if you are using Integrated Authentication, as this will still require NTLM/Kerberos or the like.
The way around this is to change the settings on the client.
You need to allow the client access to the outside world WITHOUT routing through the proxy. Certainly in TMG, this is done by changing the settings which apply to the client network software installed on the client hardware.
By allowing the PhantomJS executable to bypass the proxy, you will overcome the problems which I and many others have experienced, but you will still have a bit of an issue, as you will have just broken your system security; be aware of that, and hope there is a new version of PhantomJS which handles NTLM/Kerberos.
Alternatively, change your proxy to use Basic Authentication, which will allow the customHeaders solution above to work, but this is potentially an even greater risk to your security than allowing the client to bypass the proxy. For reference, here is the full script I used:
var page = require('webpage').create(),
    system = require('system'),
    fs = require('fs'),
    fileName = 'phantomjs',
    extension = 'log',
    file = fs.open(fileName + '.' + extension, 'w'),
    address,
    output,
    delay,
    version = phantom.version.major + '.'
            + phantom.version.minor + '.'
            + phantom.version.patch;

if (system.args.length === 1) {
    console.log('Usage: example.js <some URL> delay');
    phantom.exit();
}

// Handle the command line arguments
address = system.args[1];
output = system.args[2];
delay = system.args[3];

// Write the headers into the log file
file.writeLine("PhantomJS version: " + version);
file.writeLine("Opening page: " + address);
file.writeLine("Writing image to: " + output);
file.writeLine("Applying a delay of: " + delay + " milliseconds");

function quit(reason, value) {
    console.log("Quit: " + reason);
    file.writeLine("Quit: " + reason);
    file.close();
    if (value !== 1) {
        // If there has been an error reported, stick a datetime stamp on the log to retain it
        var d = new Date();
        var dateString = d.getFullYear().toString() +
            ((d.getMonth() + 1) <= 9 ? '0' : '') + (d.getMonth() + 1).toString() +
            (d.getDate() <= 9 ? '0' : '') + d.getDate().toString() +
            (d.getHours() <= 9 ? '0' : '') + d.getHours().toString() +
            (d.getMinutes() <= 9 ? '0' : '') + d.getMinutes().toString() +
            (d.getSeconds() <= 9 ? '0' : '') + d.getSeconds().toString();
        fs.move(fileName + '.' + extension, fileName + '_' + dateString + '.' + extension);
    }
    phantom.exit(value);
}

page.onResourceError = function(resourceError) {
    page.reason = resourceError.errorString;
    page.reason_url = resourceError.url;
};

page.onError = function(msg, trace) {
    console.log(msg);
    file.writeLine(msg);
    trace.forEach(function(item) {
        console.log(' ', item.file, ':', item.line);
        //file.writeLine(' ', item.file, ':', item.line);
    });
    quit("Failed", 0);
};

page.onResourceRequested = function(request) {
    file.writeLine('Request: ' + JSON.stringify(request, undefined, 4));
};

page.onResourceReceived = function(response) {
    file.writeLine('Receive: ' + JSON.stringify(response, undefined, 4));
};

// Set a user agent - if required
//page.settings.userAgent = 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; .NET CLR 1.1.4322)';

// And open the page
page.open(address, function(status) {
    if (status !== 'success') {
        console.log('Unable to load the address: \"' + page.reason_url + '\": ' + page.reason);
        file.writeLine('Unable to load the address: \"' + page.reason_url + '\": ' + page.reason);
        quit("Failed", 0);
    } else {
        window.setTimeout(function() {
            console.log('Saving the page!');
            file.writeLine('Saving the page!');
            page.render(output);
            quit("Finished", 1);
        }, delay);
    }
});
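Once the proxy is switched to Basic authentication, the stock flags should work with the script above. A hypothetical invocation (note that the documented spelling of the flag is --proxy-auth):

phantomjs --proxy=xx.xx.xx.xx:8080 --proxy-type=http --proxy-auth=myusername:mypassword example.js https://examplesite.com page.jpg 3000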

CryptoJS (HMAC SHA256) giving incorrect output?

Let me start by saying I'm no expert in cryptography algorithms...
I am trying to build a method which formats an HTTP header for Windows Azure - and this header requires part of its message to be signed via HMAC with SHA256 (and then also base64 encoded).
I chose to use CryptoJS because it's got an active user community.
First, my code:
_encodeAuthHeader : function (url, params, date) {
    // http://msdn.microsoft.com/en-us/library/windowsazure/dd179428
    var canonicalizedResource = '/' + this.getAccountName() + url;
    /*
        StringToSign = Date + "\n" + CanonicalizedResource
    */
    var stringToSign = date + '\n' + canonicalizedResource;
    console.log('stringToSign >> ' + stringToSign);
    var encodedBits = CryptoJS.HmacSHA256(stringToSign, this.getAccessKey());
    console.log('encodedBits >> ' + encodedBits);
    var base64Bits = CryptoJS.enc.Base64.stringify(encodedBits);
    console.log('base64Bits >> ' + base64Bits);
    var signature = 'SharedKeyLite ' + this.getAccountName() + ':' + base64Bits;
    console.log('signature >> ' + signature);
    return signature;
},
The method successfully returns a "signature" with the appropriate piece signed/encoded. However, Azure complains that it's not formatted correctly.
Some example output:
stringToSign >> Mon, 29 Jul 2013 16:04:20 GMT\n/senchaazurestorage/Tables
encodedBits >> 6723ace2ec7b0348e1270ccbaab802bfa5c1bbdddd108aece88c739051a8a767
base64Bits >> ZyOs4ux7A0jhJwzLqrgCv6XBu93dEIrs6IxzkFGop2c=
signature >> SharedKeyLite senchaazurestorage:ZyOs4ux7A0jhJwzLqrgCv6XBu93dEIrs6IxzkFGop2c=
Doing some debugging, I am noticing that CryptoJS is not returning the same value (HMAC with SHA256) as alternative implementations. For example, the string "Mon, 29 Jul 2013 16:04:20 GMT\n/senchaazurestorage/Tables" appears as:
"6723ace2ec7b0348e1270ccbaab802bfa5c1bbdddd108aece88c739051a8a767" via CryptoJS
"faa89f45ef029c63d04b8522d07c54024ae711924822c402b2d387d05398fc9f" via PHP hash_hmac('sha256', ... )
Digging even deeper, I'm seeing that most HMAC/SHA256 implementations return data which matches the output from PHP... am I missing something in CryptoJS? Or is there a legitimate difference?
As I mentioned in my first comment, the newline ("\n") was causing problems. Escaping it (as "\\n") seems to have fixed the inconsistency in the HMAC/SHA256 output.
I'm still having problems with the Azure HTTP "Authorization" header, but that's another issue.
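For anyone comparing implementations, a third data point can help. Below is a minimal cross-check in Java (a sketch with a made-up key; substitute your own) using the JDK's javax.crypto.Mac. One common pitfall with Azure specifically: the storage access key is itself Base64-encoded and is supposed to be decoded to raw bytes before keying the HMAC, whereas passing the Base64 string straight to CryptoJS keys the HMAC with the string's UTF-8 bytes and produces a different digest.

import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class HmacCrossCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical Base64-encoded account key - substitute your real access key
        String accessKey = "bXlTZWNyZXRLZXk=";
        String stringToSign = "Mon, 29 Jul 2013 16:04:20 GMT\n/senchaazurestorage/Tables";
        // Azure expects the HMAC to be keyed with the *decoded* key bytes
        byte[] keyBytes = Base64.getDecoder().decode(accessKey);
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(keyBytes, "HmacSHA256"));
        byte[] digest = mac.doFinal(stringToSign.getBytes(StandardCharsets.UTF_8));
        // Base64 of the raw digest is what goes into the SharedKeyLite header
        System.out.println(Base64.getEncoder().encodeToString(digest));
    }
}

If CryptoJS and the JDK are fed the same key bytes and the same string (with a real newline, not a literal backslash-n), their Base64 outputs should agree.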