Modular Java 13 / JavaFX WebView fails to display when jlinked - SSL

I have a problem displaying a webpage in an embedded window, but only when creating a standalone jlinked package, and only for certain HTTPS sites.
I followed the instructions at https://openjfx.io/openjfx-docs/#install-javafx for creating a simple modular app, and this works fine when run from the command line with
java --module-path "%PATH_TO_FX%;mods" -m uk.co.comsci.testproj/uk.co.comsci.testproj.Launcher
but after jlinking with the command
jlink --module-path "%PATH_TO_FX_MODS%;mods" --add-modules uk.co.comsci.testproj --output launch
and running with
launch\bin\java.exe -m uk.co.comsci.testproj/uk.co.comsci.testproj.Launcher
the JavaFX scene opens but shows just a blank screen, and I have to use Task Manager to terminate the app.
If I change the URL to other HTTPS sites, it displays fine.
I guess it is down to security settings and policies somewhere, but I have no idea where to start.
I have tried monitoring with Wireshark, which shows that when run from java (where it works) it negotiates TLSv1.3 to establish the connection, whereas the jlinked package only attempts TLSv1.2. Maybe a clue?
Here's my SSCCE:
module-info.java
module uk.co.comsci.testproj {
requires javafx.web;
requires javafx.controls;
requires javafx.media;
requires javafx.graphics;
requires javafx.base;
exports uk.co.comsci.testproj;
}
Launcher.java
package uk.co.comsci.testproj;
public class Launcher {
public static void main(String[] args) {
try {
MainApp.main(args);
} catch (Exception ex) {
System.err.println("Exception!!! " + ex);
}
}
}
MainApp.java
package uk.co.comsci.testproj;
import javafx.application.Application;
import javafx.geometry.Pos;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.layout.HBox;
import javafx.scene.layout.Priority;
import javafx.scene.layout.VBox;
import javafx.scene.web.WebEngine;
import javafx.scene.web.WebView;
import javafx.stage.Modality;
import javafx.stage.Stage;
import javafx.stage.StageStyle;
public class MainApp extends Application {
private Stage mainStage;
public static void main(String[] args) throws Exception {
launch(args);
}
@Override
public void start(final Stage initStage) throws Exception {
mainStage = new Stage(StageStyle.DECORATED);
mainStage.setTitle("Test Project");
WebView browser = new WebView();
WebEngine webEngine = browser.getEngine();
// webEngine.load("https://app.comsci.co.uk"); // url);
String uri = "https://test-api.service.hmrc.gov.uk/oauth/authorize"
+ "?response_type=code&redirect_uri=http%3A%2F%2Flocalhost%3A8084%2Fredirect"
+ "&state=lFuLG42uri_aAQ_bDBa9TZGGYD0BDKtFRv8xEaKbeQo"
+ "&client_id=tASN6IpBPt5OcIHlWzkaLXTAyMEa&scope=read%3Avat+write%3Avat";
webEngine.load(uri);
Button closeButt = new Button("Cancel");
closeButt.setOnMouseClicked(event -> {
mainStage.close();
});
HBox closeButBar = new HBox(closeButt);
closeButBar.setAlignment(Pos.BASELINE_RIGHT);
VBox vlo = new VBox(browser, closeButBar);
vlo.setFillWidth(true);
vlo.setSpacing(10.0);
VBox.setVgrow(browser, Priority.ALWAYS);
Scene scene2 = new Scene(vlo, 800, 800);
mainStage.setScene(scene2);
mainStage.initModality(Modality.APPLICATION_MODAL);
mainStage.setTitle("Test connection");
mainStage.showAndWait();
}
}
Any help much appreciated.

OK, finally tracked it down. So in case anyone has the same problem:
It was nothing to do with JavaFX or WebView; the TLS handshake was failing.
Replacing the WebView with an HttpClient GET
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;   // needs "requires java.net.http;" in a modular app
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
String uri = "https://test-api.service.hmrc.gov.uk/oauth/authorize"
+ "?response_type=code&redirect_uri=http%3A%2F%2Flocalhost%3A8084%2Fredirect"
+ "&state=lFuLG42uri_aAQ_bDBa9TZGGYD0BDKtFRv8xEaKbeQo"
+ "&client_id=tASN6IpBPt5OcIHlWzkaLXTAyMEa&scope=read%3Avat+write%3Avat";
var client = HttpClient.newHttpClient();
var request = HttpRequest.newBuilder()
.GET()
.uri(URI.create(uri))
.timeout(Duration.ofSeconds(15))
.build();
try {
HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
System.out.println("REsponse " + response.body());
} catch (IOException | InterruptedException e) {
e.printStackTrace();
}
and running with '-Djavax.net.debug=ssl:handshake:verbose' showed that the handshake was failing.
Running the embedded keytool -showinfo -tls and comparing its output with the system keytool showed that the TLS_ECDHE_... cipher suites were not supported in the jlinked runtime.
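For anyone reproducing the diagnosis, the commands were along these lines (paths assume the launch image produced by the jlink command above):
launch\bin\java.exe -Djavax.net.debug=ssl:handshake:verbose -m uk.co.comsci.testproj/uk.co.comsci.testproj.Launcher
launch\bin\keytool.exe -showinfo -tls
keytool -showinfo -tls
The first command prints the failing handshake; diffing the two keytool outputs shows which cipher suites are missing from the jlinked runtime.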
A bit of googling and help from here https://www.gubatron.com/blog/2019/04/25/solving-received-fatal-alert-handshake_failure-error-when-performing-https-connections-on-a-custom-made-jre-with-jlink/ showed that all I needed to do was add
requires jdk.crypto.cryptoki;
to my module-info.java :-)
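For reference, the working module-info.java ends up as follows (the only change is the extra requires):
module uk.co.comsci.testproj {
    requires javafx.web;
    requires javafx.controls;
    requires javafx.media;
    requires javafx.graphics;
    requires javafx.base;
    // Brings the PKCS#11 security provider (and the EC provider it depends on) into the
    // jlinked image, which restores the missing TLS_ECDHE_* cipher suites.
    requires jdk.crypto.cryptoki;
    exports uk.co.comsci.testproj;
}
It should also be possible to leave module-info.java alone and include the module at link time instead, e.g. jlink ... --add-modules uk.co.comsci.testproj,jdk.crypto.cryptoki ..., though the requires approach is the one verified here.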

You just need to add the following line to module-info.java:
requires jdk.crypto.cryptoki;

Related

AWS Java SDK from an XAgent

I'm developing an application in which much of the work interacts with AWS S3.
Initial situation:
Domino: Release 9.0.1FP6.
An XPages application with AWS utilities working perfectly, with the typical functionality of readBucket, downloadFile, createBucket, etc.
Because of its weight, I need to separate this logic out of the application, and I have tried three approaches to doing so.
First, in another database, an agent receives a docID from the main application and executes the requested S3 operations. The mechanism works perfectly, but the memory consumption is unacceptable, so it was discarded.
Second, in another new database with the same libraries and classes, I used an XAgent based on "How to schedule an Xagent from a Domino Java agent?", but with the non-SSL access that Per Henrik Lausten describes. It works fine, but if we load S3 it gives errors.
Console Java:
Starting http://localhost/proves/s3.nsf/demo.xsp
java.lang.NullPointerException --> at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:727)
Console Domino
HTTP JVM: demo.xsp --> beforePageLoad ---> Hello Word
HTTP JVM: CLFAD0211E: Exception thrown. please consult error-log-0.xml
Error-log-0.xml
Exception occurred servicing request for: /proves/s3.nsf/demo.xsp - HTTP Code: 500
IBM_TECHNICAL_SUPPORT\xpages_exc.log
java.lang.NoClassDefFoundError: com.amazonaws.auth.AWSCredentials
I think the problem may be with using this mechanism because it is not secure: if demo.xsp is accessed directly from the browser, the whole AWS workload runs under the default credentials.
I tested a third, SSL-based XAgent approach according to Devin Olson's blog post, Scheduled Xagents, but it throws an error:
Console Java:
Exception:javax.net.ssl.SSLHandshakeException: com.ibm.jsse2.util.j: No trusted certificate found
Is this approach to separating the application logic correct?
Any suggestions as to why the third procedure for SSL is failing?
Thanks in advance
Edit: here is the XAgent code (Agent properties, Security tab = 3):
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.net.Socket;
import javax.net.ssl.SSLSocketFactory;
import lotus.domino.AgentBase;
public class JavaAgent extends AgentBase {
// Change these settings below to your setup as required.
static final String hostName = "localhost";
static final String urlFilepath = "/proves/s3.nsf/demo.xsp";
static final int sslPort = 443;
public void NotesMain() {
try {
final SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
final Socket socket = factory.createSocket(JavaAgent.hostName, JavaAgent.sslPort);
final BufferedWriter out = new BufferedWriter(new OutputStreamWriter(socket.getOutputStream()));
final BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
final StringBuilder sb = new StringBuilder();
sb.append("GET ");
sb.append(JavaAgent.urlFilepath);
sb.append(" HTTP/1.1\n");
final String command = sb.toString();
sb.setLength(0);
sb.append("Host: ");
sb.append(JavaAgent.hostName);
sb.append("\n\n");
final String hostinfo = sb.toString();
out.write(command);
out.write(hostinfo);
out.flush();
in.close();
out.close();
socket.close();
} catch (final Exception e) {
// YOUR_EXCEPTION_HANDLING_CODE
System.out.println("Exception:" + e);
}
}
}
Code demo.xsp
<?xml version="1.0" encoding="UTF-8"?>
<xp:view xmlns:xp="http://www.ibm.com/xsp/core">
<xp:this.beforePageLoad><![CDATA[#{javascript:
print("demo.xsp --> beforePageLoad ---> Hello Word");
var a = new Array();
a[0] = "mybucket-proves";
a[1] = @UserName();
var s3 = new S3();
var vector:java.util.Vector = s3.mainReadBucket(a);
var i=0;
for ( i = 0; i < vector.size(); i++) {
print("Value:" + vector.get(i));
}
}]]></xp:this.beforePageLoad>
<xp:label value="Demo" id="label1"></xp:label>
</xp:view>
New test:
Although the two databases reside on the same server, I have imported an SSL Certificate Authority into the JVM in case that was the fault, but it still gives the same error: SSLHandshakeException: com.ibm.jsse2.util.j: No trusted certificate.
Note: I have also tested this agent and the demo.xsp page in the main application, where the AWS libraries work properly, and I get the same error.
Thank you

How to use Apache Apex Malhar RabbitMQ operator in DAG

I have an Apache Apex application DAG which reads RabbitMQ messages from a queue. Which Apache Apex Malhar operator should I use? There are several operators but it's not clear which one to use or how to use it.
Have you looked at https://github.com/apache/apex-malhar/tree/master/contrib/src/main/java/com/datatorrent/contrib/rabbitmq ? There are also tests in https://github.com/apache/apex-malhar/tree/master/contrib/src/test/java/com/datatorrent/contrib/rabbitmq that show how to use the operator
https://github.com/apache/apex-malhar/blob/master/contrib/src/main/java/com/datatorrent/contrib/rabbitmq/AbstractRabbitMQInputOperator.java
That is the main operator code where the tuple type is a generic parameter and emitTuple() is an abstract method that subclasses need to implement.
AbstractSinglePortRabbitMQInputOperator is a simple subclass that provides a single output port and implements emitTuple() using another abstract method getTuple() which needs an implementation in its subclasses.
The tests that Sanjay pointed to show how to use these classes.
I also had problems finding out how to read messages from RabbitMQ into Apache Apex. With the help of the links in Sanjay's answer (https://stackoverflow.com/a/42210636/2350644) I finally managed to get it running. Here's how it all works together:
1. Setup a RabbitMQ Server
There are lot of ways installing RabbitMQ that are described here: https://www.rabbitmq.com/download.html
The simplest way for me was using docker (See: https://store.docker.com/images/rabbitmq)
docker pull rabbitmq
docker run -d --hostname my-rabbit --name some-rabbit -p 5672:5672 -p 15672:15672 rabbitmq:3-management
To check if RabbitMQ is working, open a browser and navigate to: http://localhost:15672/. You should see the Management page of RabbitMQ.
2. Write a Producer program
To send messages to the queue you can write a simple Java program like this:
import com.rabbitmq.client.BuiltinExchangeType;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.util.Arrays;
import java.util.List;
public class Send {
private final static String EXCHANGE = "myExchange";
public static void main(String[] args) throws Exception {
ConnectionFactory factory = new ConnectionFactory();
factory.setHost("localhost");
Connection connection = factory.newConnection();
Channel channel = connection.createChannel();
channel.exchangeDeclare(EXCHANGE, BuiltinExchangeType.FANOUT);
String queueName = channel.queueDeclare().getQueue();
channel.queueBind(queueName, EXCHANGE, "");
List<String> messages = Arrays.asList("Hello", "World", "!");
for (String msg : messages) {
channel.basicPublish(EXCHANGE, "", null, msg.getBytes("UTF-8"));
System.out.println(" [x] Sent '" + msg + "'");
}
channel.close();
connection.close();
}
}
If you execute the Java program you should see some output in the RabbitMQ management UI.
3. Implement a sample Apex Application
3.1 Bootstrap a sample apex application
Follow the official apex documentation http://docs.datatorrent.com/beginner/
3.2 Add additional dependencies to pom.xml
To use the classes provided by Malhar, add the following dependencies:
<dependency>
<groupId>org.apache.apex</groupId>
<artifactId>malhar-contrib</artifactId>
<version>3.7.0</version>
</dependency>
<dependency>
<groupId>com.rabbitmq</groupId>
<artifactId>amqp-client</artifactId>
<version>4.2.0</version>
</dependency>
3.3 Create a Consumer
We first need to create an InputOperator that consumes messages from RabbitMQ using available code from apex-malhar.
import com.datatorrent.contrib.rabbitmq.AbstractSinglePortRabbitMQInputOperator;
public class MyRabbitMQInputOperator extends AbstractSinglePortRabbitMQInputOperator<String> {
@Override
public String getTuple(byte[] message) {
return new String(message);
}
}
You only have to override the getTuple() method. In this case we simply return the message that was received from RabbitMQ.
3.4 Setup an Apex DAG
To test the application we simply add an InputOperator (MyRabbitMQInputOperator that we implemented before) that consumes data from RabbitMQ and a ConsoleOutputOperator that prints the received messages.
import com.rabbitmq.client.BuiltinExchangeType;
import org.apache.hadoop.conf.Configuration;
import com.datatorrent.api.annotation.ApplicationAnnotation;
import com.datatorrent.api.StreamingApplication;
import com.datatorrent.api.DAG;
import com.datatorrent.api.DAG.Locality;
import com.datatorrent.lib.io.ConsoleOutputOperator;
@ApplicationAnnotation(name="MyFirstApplication")
public class Application implements StreamingApplication
{
private final static String EXCHANGE = "myExchange";
@Override
public void populateDAG(DAG dag, Configuration conf)
{
MyRabbitMQInputOperator consumer = dag.addOperator("Consumer", new MyRabbitMQInputOperator());
consumer.setHost("localhost");
consumer.setExchange(EXCHANGE);
consumer.setExchangeType(BuiltinExchangeType.FANOUT.getType());
ConsoleOutputOperator cons = dag.addOperator("console", new ConsoleOutputOperator());
dag.addStream("myStream", consumer.outputPort, cons.input).setLocality(Locality.CONTAINER_LOCAL);
}
}
3.5 Test the Application
To test the created application we can simply write a unit test, so there is no need to set up a Hadoop/YARN cluster.
The bootstrapped application already contains a unit test, ApplicationTest.java, that we can use:
import java.io.IOException;
import javax.validation.ConstraintViolationException;
import org.junit.Assert;
import org.apache.hadoop.conf.Configuration;
import org.junit.Test;
import com.datatorrent.api.LocalMode;
/**
* Test the DAG declaration in local mode.
*/
public class ApplicationTest {
@Test
public void testApplication() throws IOException, Exception {
try {
LocalMode lma = LocalMode.newInstance();
Configuration conf = new Configuration(true);
//conf.addResource(this.getClass().getResourceAsStream("/META-INF/properties.xml"));
lma.prepareDAG(new Application(), conf);
LocalMode.Controller lc = lma.getController();
lc.run(10000); // runs for 10 seconds and quits
} catch (ConstraintViolationException e) {
Assert.fail("constraint violations: " + e.getConstraintViolations());
}
}
}
Since we don't need any properties for this application, the only thing changed in this file is commenting out the line:
conf.addResource(this.getClass().getResourceAsStream("/META-INF/properties.xml"));
If you execute ApplicationTest.java and send messages to RabbitMQ using the producer program described in step 2, the test should output all the messages.
You might need to increase the duration of the test to see all messages (it is currently set to 10 seconds).
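For example, to let the local cluster run for 30 seconds instead, change that line in ApplicationTest.java to:
lc.run(30000); // runs for 30 seconds and quits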

Jersey client - Invalid signature file digest for Manifest main attributes

I'm new to JAX-RS and trying to figure out what is happening here:
I have a simple Hello World Jersey REST service running on Glassfish (Eclipse plugin). I can access it successfully from a browser.
Now, I'd like to call it from a Java class (so I can build JUnit tests around it), but I get this error on the buildGet() method:
java.lang.SecurityException: Invalid signature file digest for Manifest main attributes
Unless some magic I'm not aware of happens, I'm not packaging my service and/or client in any jar so it's not related to my application jar signature.
Anyone could explain what I'm doing wrong?
Why is the exception triggered on the buildGet() method and not on any method called before?
My main:
package com.test;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.Invocation;
import javax.ws.rs.client.WebTarget;
import javax.ws.rs.core.Response;
public class HelloTest {
public static void main(String[] args)
{
Client client = ClientBuilder.newClient();
Response response = null;
try {
WebTarget webTarget = client.target("http://localhost:9595/Hello/api/ping");
Invocation helloInvocation = webTarget.request().buildGet();
response = helloInvocation.invoke();
}
catch (Throwable ex) {
System.out.println(ex.getMessage());
}
finally {
response.close();
}
}
}
My service:
package com.api;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
#Path("ping")
public class Hello
{
@GET
@Produces(MediaType.TEXT_HTML)
public String sayHtmlHello()
{
return "<html>" + "<title>" + "Hello" + "</title>"
+ "<body><h1>" + "Hello!!!" + "</body></h1>" + "</html>";
}
}
After struggling with this for a while, it turned out that my Maven configuration had issues and some dependencies were not downloaded/built correctly. I started a new project, copied my source files over, and everything started to work as expected.

How can I pass the hostname as a parameter for SSH in Guacamole

I tried to build a demo Guacamole app for SSH from the tutorial below.
http://guac-dev.org/doc/gug/writing-you-own-guacamole-app.html
The app worked just fine as long as the values were hardcoded. But I need to get the hostname/IP from the user. To achieve that I tried using request.getParameter() in the code below:
package org.glyptodon.guacamole.net.example;
import javax.servlet.http.HttpServletRequest;
import org.glyptodon.guacamole.GuacamoleException;
import org.glyptodon.guacamole.net.GuacamoleSocket;
import org.glyptodon.guacamole.net.GuacamoleTunnel;
import org.glyptodon.guacamole.net.InetGuacamoleSocket;
import org.glyptodon.guacamole.net.SimpleGuacamoleTunnel;
import org.glyptodon.guacamole.protocol.ConfiguredGuacamoleSocket;
import org.glyptodon.guacamole.protocol.GuacamoleConfiguration;
import org.glyptodon.guacamole.servlet.GuacamoleHTTPTunnelServlet;
public class TutorialGuacamoleTunnelServlet
extends GuacamoleHTTPTunnelServlet {
@Override
protected GuacamoleTunnel doConnect(HttpServletRequest request)
throws GuacamoleException {
// Create our configuration
String hostname = request.getParameter("hostname");
GuacamoleConfiguration config = new GuacamoleConfiguration();
config.setProtocol("ssh");
config.setParameter("hostname", hostname);
config.setParameter("port", "22");
// Connect to guacd - everything is hard-coded here.
GuacamoleSocket socket = new ConfiguredGuacamoleSocket(
new InetGuacamoleSocket("localhost", 4822),
config
);
// Return a new tunnel which uses the connected socket
return new SimpleGuacamoleTunnel(socket);
}
}
But when I try to use it like localhost:8080/guacamole-tutorial-0.9.9?hostname=localhost, it doesn't work, whereas it works just fine if I hardcode the same values.
Please help me out.
You'll need to use JavaScript to pass those parameters to the connect() function of Guacamole.Client (taken from here):
<script type="text/javascript">
// Get display div from document
var display = document.getElementById("display");
// Instantiate client, using an HTTP tunnel for communications.
var guac = new Guacamole.Client(
new Guacamole.HTTPTunnel("tunnel")
);
// Add client to display div
display.appendChild(guac.getDisplay().getElement());
// Error handler
guac.onerror = function(error) {
alert(error);
console.log(error);
};
// Connect
guac.connect('ip=192.168.99.100&user=root'); // set parameters here
// Disconnect on close
window.onunload = function() {
guac.disconnect();
}
</script>
and in your TutorialGuacamoleTunnelServlet access these as:
config.setProtocol("ssh");
config.setParameter("hostname", request.getParameter("ip"));
config.setParameter("username", request.getParameter("user"));

Jetty - how to only allow requests from a specific domain

I have exposed an API provided by a Jetty server to a front-end application. I want to make sure that only the front-end application (from a certain domain) has access to that API; any other requests should be unauthorised.
What's the best way of implementing this security feature?
Update: I have set up a CrossOriginFilter; however, I can still access the API via a basic GET request from my browser.
Thanks!
Use the IPAccessHandler to set up whitelists and blacklists.
Example: this will allow 127.0.0.* and 192.168.1.* to access everything.
But 192.168.1.132 cannot access /home/* content.
package jetty.demo;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.eclipse.jetty.server.handler.IPAccessHandler;
import org.eclipse.jetty.servlet.DefaultServlet;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;
public class IpAccessExample
{
public static void main(String[] args)
{
System.setProperty("org.eclipse.jetty.servlet.LEVEL","DEBUG");
System.setProperty("org.eclipse.jetty.server.handler.LEVEL","DEBUG");
Server server = new Server();
ServerConnector connector = new ServerConnector(server);
connector.setPort(8080);
server.addConnector(connector);
// Setup IPAccessHandler
IPAccessHandler ipaccess = new IPAccessHandler();
ipaccess.addWhite("127.0.0.0-255|/*");
ipaccess.addWhite("192.168.1.1-255|/*");
ipaccess.addBlack("192.168.1.132|/home/*");
server.setHandler(ipaccess);
// Setup the basic application "context" for this application at "/"
// This is also known as the handler tree (in jetty speak)
ServletContextHandler context = new ServletContextHandler(ServletContextHandler.SESSIONS);
context.setContextPath("/");
// make context a subordinate of ipaccess
ipaccess.setHandler(context);
// The filesystem paths we will map
String homePath = System.getProperty("user.home");
String pwdPath = System.getProperty("user.dir");
// First, add special pathspec of "/home/" content mapped to the homePath
ServletHolder holderHome = new ServletHolder("static-home", DefaultServlet.class);
holderHome.setInitParameter("resourceBase",homePath);
holderHome.setInitParameter("dirAllowed","true");
holderHome.setInitParameter("pathInfoOnly","true");
context.addServlet(holderHome,"/home/*");
// Lastly, the default servlet for root content
// It is important that this is last.
ServletHolder holderPwd = new ServletHolder("default", DefaultServlet.class);
holderPwd.setInitParameter("resourceBase",pwdPath);
holderPwd.setInitParameter("dirAllowed","true");
context.addServlet(holderPwd,"/");
try
{
server.start();
server.join();
}
catch (Throwable t)
{
t.printStackTrace(System.err);
}
}
}
Or alternatively, write your own Handler to filter based on some other arbitrary rule, such as looking for a required request header that your specific front-end application provides but a browser would not.
package jetty.demo;
import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.eclipse.jetty.http.HttpStatus;
import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.handler.HandlerWrapper;
public class BanBrowserHandler extends HandlerWrapper
{
@Override
public void handle(String target, Request baseRequest, HttpServletRequest request, HttpServletResponse response) throws IOException, ServletException
{
String xfe = request.getHeader("X-FrontEnd");
if ((xfe == null) || (!xfe.startsWith("MagicApp-")))
{
// not your front-end
response.sendError(HttpStatus.FORBIDDEN_403);
baseRequest.setHandled(true);
return;
}
getHandler().handle(target,baseRequest,request,response);
}
}
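To wire it in, wrap the existing handler tree with it, e.g. (assuming the server and context from the first example):
// Every request must now carry the expected X-FrontEnd header to reach the context
BanBrowserHandler banHandler = new BanBrowserHandler();
banHandler.setHandler(context);
server.setHandler(banHandler);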
The class IPAccessHandler is deprecated; InetAccessHandler is recommended instead.
org.eclipse.jetty.server.Server server = ...;
InetAccessHandler ipaccess = new InetAccessHandler();
ipaccess.include(clientIP);
ipaccess.setHandler(server.getHandler());
server.setHandler(ipaccess);
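A fuller sketch in the style of the first example (the included addresses are placeholders for wherever your front-end actually runs, and the exact pattern syntax accepted by include() varies a little between Jetty versions):
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;
import org.eclipse.jetty.server.handler.InetAccessHandler;
import org.eclipse.jetty.servlet.ServletContextHandler;
public class InetAccessExample
{
    public static void main(String[] args) throws Exception
    {
        Server server = new Server();
        ServerConnector connector = new ServerConnector(server);
        connector.setPort(8080);
        server.addConnector(connector);
        // Only requests from the included addresses get through; everything else is rejected.
        InetAccessHandler ipaccess = new InetAccessHandler();
        ipaccess.include("127.0.0.1");                 // local testing
        ipaccess.include("192.168.1.10-192.168.1.20"); // placeholder range for the front-end hosts
        server.setHandler(ipaccess);
        // The application context becomes a child of the access handler
        ServletContextHandler context = new ServletContextHandler(ServletContextHandler.SESSIONS);
        context.setContextPath("/");
        ipaccess.setHandler(context);
        server.start();
        server.join();
    }
}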