With reference to the solution provided in How to use TLS 1.2 in Java 6, is it possible to use the TLSSocketConnectionFactory with Apache HttpClient 4.4?
You should be able to use TLSSocketConnectionFactory with HttpClient like the following:
SSLConnectionSocketFactory sf = new SSLConnectionSocketFactory(
        new TLSSocketConnectionFactory(), new String[]{"TLSv1.2"}, null,
        new DefaultHostnameVerifier());
HttpClient client = HttpClientBuilder.create()
.setSSLSocketFactory(sf)
.build();
You may need to change some of the SSLSocket and SSLSession method implementations in TLSSocketConnectionFactory. In my case, when I tried to use it with HttpClient, I had to change the following:
In the SSLSocket implementation:
@Override
public String[] getEnabledCipherSuites() {
    // return null;
    return new String[]{""};
}
@Override
public String[] getEnabledProtocols() {
    // return null;
    return new String[]{""};
}
In the SSLSession implementation:
@Override
public String getProtocol() {
    // throw new UnsupportedOperationException();
    return "";
}
@Override
public String getCipherSuite() {
    // throw new UnsupportedOperationException();
    return "";
}
I've recently been learning about how to use the Kafka Streams client and one thing that I've been struggling with is how to switch from the default state store (RocksDB) to a custom state store using something like Redis. The Confluent documentation makes it clear you have to implement the StateStore interface for your custom store and you must provide an implementation of StoreBuilder for creating instances of that store.
Here's what I have so far for my custom store. I've also added a simple write method to append a new entry into a specified stream via the Redis XADD command.
public class MyCustomStore<K,V> implements StateStore, MyWriteableCustomStore<K,V> {
    private String name;
    private volatile boolean open = false;
    private boolean loggingEnabled = false;
    public MyCustomStore(String name, boolean loggingEnabled) {
        this.name = name;
        this.loggingEnabled = loggingEnabled;
    }
    @Override
    public String name() {
        return this.name;
    }
    @Override
    public void init(ProcessorContext context, StateStore root) {
        if (root != null) {
            // register the store
            context.register(root, (key, value) -> {
                write(key.toString(), value.toString());
            });
        }
        this.open = true;
    }
    @Override
    public void flush() {
        // TODO Auto-generated method stub
    }
    @Override
    public void close() {
        // TODO Auto-generated method stub
    }
    @Override
    public boolean persistent() {
        // TODO Auto-generated method stub
        return true;
    }
    @Override
    public boolean isOpen() {
        // TODO Auto-generated method stub
        return false;
    }
    @Override
    public void write(String key, String value) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            Map<String, String> hash = new HashMap<>();
            hash.put(key, value);
            jedis.xadd("MyStream", StreamEntryID.NEW_ENTRY, hash);
        }
    }
}
public class MyCustomStoreBuilder implements StoreBuilder<MyCustomStore<String,String>> {
    private boolean cached = true;
    private String name;
    private Map<String,String> logConfig = new HashMap<>();
    private boolean loggingEnabled;
    public MyCustomStoreBuilder(String name, boolean loggingEnabled) {
        this.name = name;
        this.loggingEnabled = loggingEnabled;
    }
    @Override
    public StoreBuilder<MyCustomStore<String,String>> withCachingEnabled() {
        this.cached = true;
        return this;
    }
    @Override
    public StoreBuilder<MyCustomStore<String,String>> withCachingDisabled() {
        this.cached = false;
        return this;
    }
    @Override
    public StoreBuilder<MyCustomStore<String,String>> withLoggingEnabled(Map<String, String> config) {
        this.loggingEnabled = true;
        return this;
    }
    @Override
    public StoreBuilder<MyCustomStore<String,String>> withLoggingDisabled() {
        this.loggingEnabled = false;
        return this;
    }
    @Override
    public MyCustomStore<String,String> build() {
        return new MyCustomStore<String,String>(this.name, this.loggingEnabled);
    }
    @Override
    public Map<String, String> logConfig() {
        return logConfig;
    }
    @Override
    public boolean loggingEnabled() {
        return loggingEnabled;
    }
    @Override
    public String name() {
        return name;
    }
}
And here's what my setup and topology look like.
@Bean
public KafkaStreams kafkaStreams(KafkaProperties kafkaProperties) {
    final Properties props = new Properties();
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProperties.getBootstrapServers());
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, appName);
    props.put(AbstractKafkaSchemaSerDeConfig.SCHEMA_REGISTRY_URL_CONFIG, schemaRegistryUrl);
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.Long().getClass());
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.Double().getClass());
    props.put(StreamsConfig.STATE_DIR_CONFIG, "data");
    props.put(StreamsConfig.APPLICATION_SERVER_CONFIG, appServerConfig);
    props.put(JsonDeserializer.VALUE_DEFAULT_TYPE, JsonNode.class);
    props.put(StreamsConfig.DEFAULT_DESERIALIZATION_EXCEPTION_HANDLER_CLASS_CONFIG, LogAndContinueExceptionHandler.class);
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    final String storeName = "the-custome-store";
    Topology topology = new Topology();
    // Create the custom store builder for the store named "the-custome-store"
    MyCustomStoreBuilder customStoreBuilder = new MyCustomStoreBuilder(storeName, false);
    topology.addSource("input", "inputTopic");
    topology.addProcessor("redis-processor", () -> new RedisProcessor(storeName), "input");
    topology.addStateStore(customStoreBuilder, "redis-processor");
    KafkaStreams kafkaStreams = new KafkaStreams(topology, props);
    kafkaStreams.start();
    return kafkaStreams;
}
public class MyCustomStoreType<K,V> implements QueryableStoreType<MyReadableCustomStore<String,String>> {
    @Override
    public boolean accepts(StateStore stateStore) {
        return stateStore instanceof MyCustomStore;
    }
    @Override
    public MyReadableCustomStore<String,String> create(final StateStoreProvider storeProvider, final String storeName) {
        return new MyCustomStoreTypeWrapper<>(storeProvider, storeName, this);
    }
}
public class MyCustomStoreTypeWrapper<K,V> implements MyReadableCustomStore<K,V> {
    private final QueryableStoreType<MyReadableCustomStore<String, String>> customStoreType;
    private final String storeName;
    private final StateStoreProvider provider;
    public MyCustomStoreTypeWrapper(final StateStoreProvider provider,
                                    final String storeName,
                                    final QueryableStoreType<MyReadableCustomStore<String, String>> customStoreType) {
        this.provider = provider;
        this.storeName = storeName;
        this.customStoreType = customStoreType;
    }
    @Override
    public String read(String key) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            StreamEntryID start = new StreamEntryID(0, 0);
            StreamEntryID end = null; // null -> until the last item in the stream
            int count = 2;
            List<StreamEntry> list = jedis.xrange("MyStream", start, end, count);
            if (list != null && !list.isEmpty()) {
                // Get the most recently added item, which is also the last item
                StreamEntry streamData = list.get(list.size() - 1);
                return streamData.toString();
            } else {
                System.out.println("No new data in the stream");
            }
            return "";
        }
    }
}
// This throws the InvalidStateStoreException when I try to get access to the custom store
MyReadableCustomStore<String,String> store = streams.store("the-custome-store", new MyCustomStoreType<String,String>());
String value = store.read("testKey");
So, my question is how do I actually get the state store data to persist into Redis now? I feel like I'm missing something in the state store initialization or with the StateRestoreCallback. Any help or clarification with this would be greatly appreciated.
It looks to me that you have the store wired up to the topology correctly. But you don't have any processors using the store.
It could look something like this:
final String storeName = "the-custome-store";
MyCustomStoreBuilder customStoreBuilder = new MyCustomStoreBuilder(storeName, false);
Topology topology = new Topology();
topology.addSource("input", "input-topic");
// makes the processor a child of the source node
// the source node forwards its records to the child processor node
topology.addProcessor("redis-processor", () -> new RedisProcessor(storeName), "input");
// add the store and specify the processor(s) that access the store
topology.addStateStore(customStoreBuilder, "redis-processor");
class RedisProcessor implements Processor<byte[], byte[]> {
    final String storeName;
    MyCustomStore<byte[], byte[]> stateStore;
    public RedisProcessor(String storeName) {
        this.storeName = storeName;
    }
    @Override
    public void init(ProcessorContext context) {
        stateStore = (MyCustomStore<byte[], byte[]>) context.getStateStore(storeName);
    }
    @Override
    public void process(byte[] key, byte[] value) {
        stateStore.write(key, value);
    }
    @Override
    public void close() {
    }
}
HTH, and let me know how it works out for you.
Update to answer a question from the comments:
I think you need to update MyCustomStore.isOpen() to return the open variable. Right now it's hardcoded to return false:
@Override
public boolean isOpen() {
    // TODO Auto-generated method stub
    return false;
}
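For reference, the corrected method would just return that flag, which init() sets to true once registration completes:
@Override
public boolean isOpen() {
    return this.open;
}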
I'm a new developer trying to tackle the Jackrabbit/JCR library. My team and I have been using the EveryonePrincipal for a couple of months now, but we've been wanting to implement more capabilities per user roles/principals so we can grant them necessary read/write access to each node. However, we're having some difficulties figuring out how to create a new Principal object.
I've been using:
PrincipalImpl newPrincipal = new PrincipalImpl("MyPrincipal");
Then I created a new RolePrincipal class matching EveryonePrincipal, except that the name inside the RolePrincipal class would be "MyPrincipal". Unfortunately, this approach doesn't work. Is there anything else we're missing? And how does the 'everyone' principal get stored?
EveryonePrincipal.java
public final class EveryonePrincipal implements JackrabbitPrincipal, java.security.acl.Group {
    public static final String NAME = "everyone";
    private static final EveryonePrincipal INSTANCE = new EveryonePrincipal();
    private EveryonePrincipal() { }
    public static EveryonePrincipal getInstance() {
        return INSTANCE;
    }
    //----------------------------------------------------------< Principal >---
    @Override
    public String getName() {
        return NAME;
    }
    //--------------------------------------------------------------< Group >---
    @Override
    public boolean addMember(Principal user) {
        return false;
    }
    @Override
    public boolean removeMember(Principal user) {
        throw new UnsupportedOperationException("Cannot remove a member from the everyone group.");
    }
    @Override
    public boolean isMember(Principal member) {
        return !member.equals(this);
    }
    @Override
    public Enumeration<? extends Principal> members() {
        throw new UnsupportedOperationException("Not implemented.");
    }
    //-------------------------------------------------------------< Object >---
    @Override
    public int hashCode() {
        return NAME.hashCode();
    }
    @Override
    public boolean equals(Object obj) {
        if (obj == this) {
            return true;
        } else if (obj instanceof JackrabbitPrincipal && obj instanceof Group) {
            JackrabbitPrincipal other = (JackrabbitPrincipal) obj;
            return NAME.equals(other.getName());
        }
        return false;
    }
    @Override
    public String toString() {
        return NAME + " principal";
    }
}
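For comparison, note that EveryonePrincipal is never stored at all: it is a hard-coded singleton, as getInstance() above shows. If the goal is a principal that the repository actually persists, one approach (a sketch only, assuming a JackrabbitSession; the group name is illustrative) is to create a group through Jackrabbit's UserManager and use its principal:
JackrabbitSession session = (JackrabbitSession) repository.login(credentials);
UserManager userManager = session.getUserManager();
// groups created via the UserManager are persisted in the repository
// (this is org.apache.jackrabbit.api.security.user.Group, not java.security.acl.Group)
org.apache.jackrabbit.api.security.user.Group myGroup = userManager.createGroup("MyPrincipal");
session.save();
// the stored group's principal can then be used in access control entries
Principal myPrincipal = myGroup.getPrincipal();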
I've seen some related questions here, but none of them worked for me; RabbitMQ will not deserialize my message coming from another application.
Caused by: org.springframework.amqp.AmqpException: No method found for class [B
Below is my configuration class for receiving the messages.
@Configuration
public class RabbitConfiguration implements RabbitListenerConfigurer {
    public final static String EXCHANGE_NAME = "wallet-accounts";
    public final static String QUEUE_PAYMENT = "wallet-accounts.payment";
    public final static String QUEUE_RECHARGE = "wallet-accounts.recharge";
    @Bean
    public List<Declarable> ds() {
        return queues(QUEUE_PAYMENT, QUEUE_RECHARGE);
    }
    @Autowired
    private ConnectionFactory rabbitConnectionFactory;
    @Bean
    public AmqpAdmin amqpAdmin() {
        return new RabbitAdmin(rabbitConnectionFactory);
    }
    @Bean
    public TopicExchange exchange() {
        return new TopicExchange(EXCHANGE_NAME);
    }
    private List<Declarable> queues(String... names) {
        List<Declarable> result = new ArrayList<>();
        for (int i = 0; i < names.length; i++) {
            result.add(makeQueue(names[i]));
            result.add(makeBinding(names[i]));
        }
        return result;
    }
    private static Binding makeBinding(String queueName) {
        return new Binding(queueName, DestinationType.QUEUE, EXCHANGE_NAME, queueName, null);
    }
    private static Queue makeQueue(String name) {
        return new Queue(name);
    }
    @Bean
    public MappingJackson2MessageConverter jackson2Converter() {
        MappingJackson2MessageConverter converter = new MappingJackson2MessageConverter();
        return converter;
    }
    @Bean
    public DefaultMessageHandlerMethodFactory myHandlerMethodFactory() {
        DefaultMessageHandlerMethodFactory factory = new DefaultMessageHandlerMethodFactory();
        factory.setMessageConverter(jackson2Converter());
        return factory;
    }
    @Override
    public void configureRabbitListeners(RabbitListenerEndpointRegistrar registrar) {
        registrar.setMessageHandlerMethodFactory(myHandlerMethodFactory());
    }
}
Using this other configuration, the error is almost the same:
Caused by: org.springframework.amqp.support.converter.MessageConversionException: failed to resolve class name. Class not found [br.com.beblue.wallet.payment.application.accounts.PaymentEntryCommand]
Configuration:
@Configuration
public class RabbitConfiguration {
    public final static String EXCHANGE_NAME = "wallet-accounts";
    public final static String QUEUE_PAYMENT = "wallet-accounts.payment";
    public final static String QUEUE_RECHARGE = "wallet-accounts.recharge";
    @Bean
    public List<Declarable> ds() {
        return queues(QUEUE_PAYMENT, QUEUE_RECHARGE);
    }
    @Autowired
    private ConnectionFactory rabbitConnectionFactory;
    @Bean
    public AmqpAdmin amqpAdmin() {
        return new RabbitAdmin(rabbitConnectionFactory);
    }
    @Bean
    public TopicExchange exchange() {
        return new TopicExchange(EXCHANGE_NAME);
    }
    @Bean
    public MessageConverter jsonMessageConverter() {
        return new Jackson2JsonMessageConverter();
    }
    private List<Declarable> queues(String... names) {
        List<Declarable> result = new ArrayList<>();
        for (int i = 0; i < names.length; i++) {
            result.add(makeQueue(names[i]));
            result.add(makeBinding(names[i]));
        }
        return result;
    }
    private static Binding makeBinding(String queueName) {
        return new Binding(queueName, DestinationType.QUEUE, EXCHANGE_NAME, queueName, null);
    }
    private static Queue makeQueue(String name) {
        return new Queue(name);
    }
}
Can anyone tell me what's wrong with these settings, or what's missing?
No method found for class [B
This means there is a default SimpleMessageConverter, which can't convert your incoming application/json message. It is not aware of that content type and simply falls back to returning the raw byte[].
Class not found [br.com.beblue.wallet.payment.application.accounts.PaymentEntryCommand]
This means that the Jackson2JsonMessageConverter can't convert your application/json message because the class named by the incoming __TypeId__ header, which represents the class of the content, cannot be found on the local classpath.
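If the producer's class genuinely isn't on the consumer's classpath, one possible workaround (a sketch; LocalPaymentEntryCommand is an illustrative stand-in for a class of your own) is to map the incoming __TypeId__ onto a local class:
Jackson2JsonMessageConverter converter = new Jackson2JsonMessageConverter();
DefaultClassMapper classMapper = new DefaultClassMapper();
Map<String, Class<?>> idClassMapping = new HashMap<>();
// map the remote type id to a class that exists on this application's classpath
idClassMapping.put("br.com.beblue.wallet.payment.application.accounts.PaymentEntryCommand",
        LocalPaymentEntryCommand.class);
classMapper.setIdClassMapping(idClassMapping);
converter.setClassMapper(classMapper);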
Well, your configuration of the DefaultMessageHandlerMethodFactory definitely does not make sense for AMQP conversion. You should consider using a SimpleRabbitListenerContainerFactory bean definition and its setMessageConverter. And yes, consider injecting the proper org.springframework.amqp.support.converter.MessageConverter implementation.
https://docs.spring.io/spring-amqp/docs/1.7.3.RELEASE/reference/html/_reference.html#async-annotation-conversion
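A minimal sketch of that approach (assuming a standard Spring AMQP setup; the bean wiring is illustrative):
@Bean
public SimpleRabbitListenerContainerFactory rabbitListenerContainerFactory(ConnectionFactory connectionFactory) {
    SimpleRabbitListenerContainerFactory factory = new SimpleRabbitListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    // inject the AMQP MessageConverter implementation here instead of
    // configuring a DefaultMessageHandlerMethodFactory
    factory.setMessageConverter(new Jackson2JsonMessageConverter());
    return factory;
}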
From the Spring Boot perspective, there is a SimpleRabbitListenerContainerFactoryConfigurer for configuring this:
https://docs.spring.io/spring-boot/docs/1.5.6.RELEASE/reference/htmlsingle/#boot-features-using-amqp-receiving
I'm quite new to Dropwizard.
I have found a lot of solutions for dealing with Jersey and SSL self-signed certificates.
Dropwizard version is 0.9.2
I have tried to set an SSLContext, but I get:
The method sslContext(SSLContext) is undefined for the type JerseyClientBuilder
Code:
TrustManager[] certs = new TrustManager[]{
    new X509TrustManager() {
        @Override
        public X509Certificate[] getAcceptedIssuers() {
            return null;
        }
        @Override
        public void checkServerTrusted(X509Certificate[] chain, String authType)
                throws CertificateException {
        }
        @Override
        public void checkClientTrusted(X509Certificate[] chain, String authType)
                throws CertificateException {
        }
    }
};
public static class TrustAllHostNameVerifier implements HostnameVerifier {
    public boolean verify(String hostname, SSLSession session) {
        return true;
    }
}
private Client getWebClient(AppConfiguration configuration, Environment env)
        throws NoSuchAlgorithmException, KeyManagementException {
    SSLContext ctx = SSLContext.getInstance("SSL");
    ctx.init(null, certs, new SecureRandom());
    Client client = new JerseyClientBuilder(env)
            .using(configuration.getJerseyClient())
            .sslContext(ctx)
            .build("MyClient");
    return client;
}
The configuration part:
private JerseyClientConfiguration jerseyClient = new JerseyClientConfiguration();
public JerseyClientConfiguration getJerseyClient() {
    return jerseyClient;
}
I have found a simple solution using just the configuration:
jerseyClient:
  tls:
    verifyHostname: false
    trustSelfSignedCertificates: true
I think that to create an insecure client in 0.9.2 you would use a Registry of ConnectionSocketFactory, something like...
final SSLContext sslContext = SSLContext.getInstance("SSL");
sslContext.init(null, new TrustManager[] { new X509TrustManager() {
    @Override
    public void checkClientTrusted(X509Certificate[] x509Certificates, String s)
            throws java.security.cert.CertificateException {
    }
    @Override
    public void checkServerTrusted(X509Certificate[] x509Certificates, String s)
            throws java.security.cert.CertificateException {
    }
    @Override
    public X509Certificate[] getAcceptedIssuers() {
        return new X509Certificate[0];
    }
} }, new SecureRandom());
final SSLConnectionSocketFactory sslConnectionSocketFactory =
        new SSLConnectionSocketFactory(sslContext, SSLConnectionSocketFactory.ALLOW_ALL_HOSTNAME_VERIFIER);
final Registry<ConnectionSocketFactory> registry = RegistryBuilder.<ConnectionSocketFactory>create()
        .register("https", sslConnectionSocketFactory)
        .register("http", PlainConnectionSocketFactory.INSTANCE)
        .build();
Client client = new JerseyClientBuilder(env)
        .using(configuration.getJerseyClient())
        .using(registry)
        .build("MyInsecureClient");
I'd like to exclude all properties with values less than zero. Does Jackson have a ready-to-use solution, or should I create a CustomSerializerFactory and BeanPropertyWriter?
You can do this by using a filter; it's a little verbose, but it does the job.
First - you need to specify the filter on your entity:
@JsonFilter("myFilter")
public class MyDtoWithFilter { ... }
Then, you need to specify your custom filter, which will look at the values:
PropertyFilter theFilter = new SimpleBeanPropertyFilter() {
    @Override
    public void serializeAsField(Object pojo, JsonGenerator jgen, SerializerProvider provider, PropertyWriter writer) throws Exception {
        if (include(writer)) {
            if (writer.getName().equals("intValue")) {
                int intValue = ((MyDtoWithFilter) pojo).getIntValue();
                // only serialize the field when its value is not negative
                if (intValue >= 0) {
                    writer.serializeAsField(pojo, jgen, provider);
                }
            } else {
                writer.serializeAsField(pojo, jgen, provider);
            }
        } else if (!jgen.canOmitFields()) { // since 2.3
            writer.serializeAsOmittedField(pojo, jgen, provider);
        }
    }
    @Override
    protected boolean include(BeanPropertyWriter writer) {
        return true;
    }
    @Override
    protected boolean include(PropertyWriter writer) {
        return true;
    }
};
This is done for a field called intValue, but you can handle all of your fields that need to be non-negative in a similar way.
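Alternatively, a sketch assuming Jackson 2.9+ (class names here are illustrative): a custom @JsonInclude value filter can express the same rule per field, without a bean-level @JsonFilter. Jackson calls the filter's equals() with the field value and suppresses the field when it returns true:
public class NegativeValueFilter {
    @Override
    public boolean equals(Object other) {
        // returning true suppresses the field, i.e. excludes negative ints
        return other instanceof Integer && (Integer) other < 0;
    }
}
public class MyDto {
    @JsonInclude(value = JsonInclude.Include.CUSTOM, valueFilter = NegativeValueFilter.class)
    private int intValue;
    // getters and setters omitted
}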
Finally, you can marshal an object and test that it works:
FilterProvider filters = new SimpleFilterProvider().addFilter("myFilter", theFilter);
MyDtoWithFilter dtoObject = new MyDtoWithFilter();
dtoObject.setIntValue(12);
ObjectMapper mapper = new ObjectMapper();
String dtoAsString = mapper.writer(filters).writeValueAsString(dtoObject);
Hope this helps.