Using jOOQ in AWS Lambda

Has anyone tried using jOOQ with AWS Lambda? I am using the same working configuration from my previous project, which runs on Tomcat, and moved it over to Lambda. The Lambda never gets past the query part. However, if I run a standard prepared statement everything works fine, so it is not a VPC/access or any other infrastructure issue.
jOOQ code:
final Street street = DSL.using(configuration)
        .select(STREET.fields())
        .from(STREET)
        .where(STREET.ADDRESS.eq(message.address()))
        .fetchOneInto(Street.class);
I am initializing config:
@Bean
public DefaultConfiguration configuration() {
    DefaultConfiguration jooqConfiguration = new DefaultConfiguration();
    jooqConfiguration.set(connectionProvider());
    jooqConfiguration.set(new DefaultExecuteListenerProvider(exceptionTransformer()));

    String sqlDialectName = environment.getRequiredProperty("jooq.sql.dialect");
    SQLDialect dialect = SQLDialect.valueOf(sqlDialectName);
    jooqConfiguration.set(dialect);

    return jooqConfiguration;
}

My bad, there was an issue with the query. jOOQ works great!

Related

Move from imperative try-with-resources to reactive using()

I'm trying to move from imperative try-with-resources to reactive try-with-resources, without success. Here is the piece of code I would like to convert.
private final AmazonS3 amazonS3;
private final String bucket;

@Override
public Mono<String> getTemplate(String templateId) {
    return Mono.fromCallable(() -> {
        S3Object s3Object = amazonS3.getObject(bucket, templateId);
        try (s3Object) {
            return IOUtils.toString(s3Object.getObjectContent());
        }
    }).subscribeOn(Schedulers.boundedElastic());
}
I would like to rewrite this using the reactive try-with-resources construct. My first try was with Flux.using:
Flux.using(amazonS3.getObject(bucket, templateId),
s3Object -> Flux.just(IOUtils.toString(s3Object.getObjectContent())),
S3Object::close);
The s3Object is not being inferred as an S3Object, so getObjectContent doesn't exist.
Then I had a look at https://projectreactor.io/docs/core/release/reference/ and I guess I might use Disposable; however, I'm not sure how to wrap S3Object in a disposable object.
Does anyone know how I can make it work?
Thanks
You can't achieve this with the approach you're taking. It's literally impossible to take a blocking API like the one you see here (AWS SDK v1) and somehow wrap it to make it reactive.
You can however use the AWS SDK v2 (you should be using this anyway for new development), which has an asynchronous S3 client (S3AsyncClient) that you can use to return a CompletableFuture<String>:
CompletableFuture<String> contents = s3AsyncClient
        .getObject(GetObjectRequest.builder().build(), new ByteArrayAsyncResponseTransformer<>())
        .thenApplyAsync(rb -> rb.asUtf8String());
You can then use Mono.fromFuture(contents) to obtain a Mono<String> from the above CompletableFuture.
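Putting that together, a minimal sketch of the whole method, assuming an S3AsyncClient field plus the bucket and templateId from the question, and using the public AsyncResponseTransformer.toBytes() factory (equivalent to the ByteArrayAsyncResponseTransformer above); the class name is illustrative:

import java.util.concurrent.CompletableFuture;

import reactor.core.publisher.Mono;
import software.amazon.awssdk.core.ResponseBytes;
import software.amazon.awssdk.core.async.AsyncResponseTransformer;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;

public class S3TemplateRepository {

    private final S3AsyncClient s3AsyncClient;
    private final String bucket;

    public S3TemplateRepository(S3AsyncClient s3AsyncClient, String bucket) {
        this.s3AsyncClient = s3AsyncClient;
        this.bucket = bucket;
    }

    public Mono<String> getTemplate(String templateId) {
        GetObjectRequest request = GetObjectRequest.builder()
                .bucket(bucket)
                .key(templateId)
                .build();

        // The transformer buffers the object body and completes the future when done,
        // so there is no stream to close and no blocking call to offload.
        CompletableFuture<String> contents = s3AsyncClient
                .getObject(request, AsyncResponseTransformer.toBytes())
                .thenApply(ResponseBytes::asUtf8String);

        return Mono.fromFuture(contents);
    }
}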

Kafka to Flink to Hive - Writes failing

I am trying to sink data to Hive via Kafka -> Flink -> Hive using the following code snippet:
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
DataStream<GenericRecord> stream = readFromKafka(env);

private static final TypeInformation[] FIELD_TYPES = new TypeInformation[]{
    BasicTypeInfo.INT_TYPE_INFO, BasicTypeInfo.STRING_TYPE_INFO
};

JDBCAppendTableSink sink = JDBCAppendTableSink.builder()
    .setDrivername("org.apache.hive.jdbc.HiveDriver")
    .setDBUrl("jdbc:hive2://hiveconnstring")
    .setUsername("myuser")
    .setPassword("mypass")
    .setQuery("INSERT INTO testHiveDriverTable (key,value) VALUES (?,?)")
    .setBatchSize(1000)
    .setParameterTypes(FIELD_TYPES)
    .build();

DataStream<Row> rows = stream.map((MapFunction<GenericRecord, Row>) st1 -> {
    Row row = new Row(2);
    row.setField(0, st1.get("SOME_ID"));
    row.setField(1, st1.get("SOME_ADDRESS"));
    return row;
});

sink.emitDataStream(rows);
env.execute("Flink101");
But I am getting the following error:
Caused by: java.lang.RuntimeException: Execution of JDBC statement failed.
at org.apache.flink.api.java.io.jdbc.JDBCOutputFormat.flush(JDBCOutputFormat.java:219)
at org.apache.flink.api.java.io.jdbc.JDBCSinkFunction.snapshotState(JDBCSinkFunction.java:43)
at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.trySnapshotFunctionState(StreamingFunctionUtils.java:118)
at org.apache.flink.streaming.util.functions.StreamingFunctionUtils.snapshotFunctionState(StreamingFunctionUtils.java:99)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.snapshotState(AbstractUdfStreamOperator.java:90)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:356)
... 12 more
Caused by: java.sql.SQLException: Method not supported
at org.apache.hive.jdbc.HiveStatement.executeBatch(HiveStatement.java:381)
at org.apache.flink.api.java.io.jdbc.JDBCOutputFormat.flush(JDBCOutputFormat.java:216)
... 17 more
I checked the hive-jdbc driver, and it seems that the method is simply not supported there:
public class HiveStatement implements java.sql.Statement {
    ...
    @Override
    public int[] executeBatch() throws SQLException {
        throw new SQLFeatureNotSupportedException("Method not supported");
    }
    ...
}
Is there any way we can achieve this using the JDBC driver?
Thanks in advance.
Hive's JDBC implementation is not complete yet. Your problem is tracked by this issue.
You could try to patch Flink's JDBCOutputFormat so that it does not use batching, by replacing upload.addBatch with upload.execute in JDBCOutputFormat.java:202 and removing the call to upload.executeBatch in JDBCOutputFormat.java:216. The downside is that you issue a dedicated SQL query for every record, which might slow things down.
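Instead of patching Flink itself, the same idea, executing each record individually because HiveStatement does not support executeBatch, can also be done with a small hand-written sink. A rough sketch only: the connection details and INSERT statement are taken from the question, everything else (class name, field handling) is illustrative:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;
import org.apache.flink.types.Row;

public class HiveRowSink extends RichSinkFunction<Row> {

    private transient Connection connection;
    private transient PreparedStatement statement;

    @Override
    public void open(Configuration parameters) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        connection = DriverManager.getConnection("jdbc:hive2://hiveconnstring", "myuser", "mypass");
        statement = connection.prepareStatement(
                "INSERT INTO testHiveDriverTable (key,value) VALUES (?,?)");
    }

    @Override
    public void invoke(Row row) throws Exception {
        statement.setObject(1, row.getField(0));
        statement.setObject(2, row.getField(1));
        // One statement per record: slower than batching, but supported by Hive JDBC.
        statement.execute();
    }

    @Override
    public void close() throws Exception {
        if (statement != null) statement.close();
        if (connection != null) connection.close();
    }
}

The sink would then be attached with rows.addSink(new HiveRowSink()); in place of sink.emitDataStream(rows).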

How to Register a Function without params with Spark SQL Java API

One can register a function using Scala:
spark.udf.register("uuid", ()=>java.util.UUID.randomUUID.toString)
Now, if I use Java API:
spark.udf().register("uuid", ()=>java.util.UUID.randomUUID().toString());
The code does not compile. So how can we do this in Java?
If you are using Java 7 or an earlier version:
sqlContext().udf().register("uuid", new UDF0<String>() {
    @Override
    public String call() {
        return java.util.UUID.randomUUID().toString();
    }
}, DataTypes.StringType);
And if you are using Java 8 or later, you can use a lambda:
sparkSession.udf().register("uuid", () -> java.util.UUID.randomUUID().toString(), DataTypes.StringType);
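For completeness, a quick usage sketch of the registered function from Spark SQL; the table name my_table is just a placeholder:

sparkSession.udf().register("uuid",
        () -> java.util.UUID.randomUUID().toString(),
        DataTypes.StringType);

// The registered UDF can then be called like any built-in SQL function:
sparkSession.sql("SELECT uuid() AS id, * FROM my_table").show();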

Apache Geode RegionExistsException

In the Pivotal Native Client I've set up a method to read and write a Geode cache region as follows:
public void GeodePut(string region, string key, string value)
{
    CacheFactory cF = CacheFactory.CreateCacheFactory();
    Cache cache = cF.Create();
    RegionFactory rF = cache.CreateRegionFactory(RegionShortcut.CACHING_PROXY);
    IRegion<string, string> r = rF.Create<string, string>(region);
    r[key] = value;
    cache.Close();
}
When I call this multiple times I get a RegionExistsException. How do I get around that? Thanks
The solution is easy.
Add a try-catch block to catch the RegionExistsException, and in the catch block replace the 'create' call with 'get', i.e. look up the existing region instead of creating it again.
Change this: rF.Create
for this: rF.get
This works pretty well in Java; I would post the exact signature of the method you need, but I'm not using the .NET native client.
Hope it helps :)
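For what it's worth, a rough Java-client sketch of that pattern (illustrative only; the ClientCache would normally be created once and reused rather than per call, and the class name is made up):

import org.apache.geode.cache.Region;
import org.apache.geode.cache.RegionExistsException;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientRegionShortcut;

public class GeodeWriter {

    private final ClientCache cache;   // created once and reused, not per call

    public GeodeWriter(ClientCache cache) {
        this.cache = cache;
    }

    public void geodePut(String regionName, String key, String value) {
        Region<String, String> region;
        try {
            region = cache.<String, String>createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY)
                    .create(regionName);
        } catch (RegionExistsException e) {
            // The region was already created by an earlier call: reuse it instead of failing.
            region = cache.getRegion(regionName);
        }
        region.put(key, value);
    }
}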
It's to do with the cache.Close() call. I no longer use cache.Close().

What is "bw_and" in hibernate query

I found a strange query in my project. When I debug and inspect the persistence Query object just before it calls the getResultList() method, the queryString I find is:
FROM AuthorityTbl a WHERE bw_and(a.setupFiltersIn, :setupFiltersIn) <> 0
This query works fine and fetches all the data from the authority table where setupFiltersIn matches :setupFiltersIn.
I am not able to understand what bw_and is in this query syntax.
Does anyone have any idea?
I am using SQL Server 2014, and a direct query with bw_and is not accepted by SQL Server.
In my application the class below is used, which registers bw_and as a bitwise AND function:
public class ExtendedMSSQLServerDialect extends SQLServerDialect {

    public ExtendedMSSQLServerDialect() {
        super();
        registerFunction("bw_and", new BitwiseSQLFunction(BitwiseSQLOperator.AND, "bw_and"));
        registerFunction("bw_or", new BitwiseSQLFunction(BitwiseSQLOperator.OR, "bw_or"));
        registerFunction("cast_text_to_varchar_of_length", new CastTextToVarcharSQLFunction("cast_text_to_varchar_of_length"));
    }
}
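For reference, a simplified sketch of what such a BitwiseSQLFunction might look like. The real class in the project may well differ (for instance it takes an operator enum plus a name), but the idea is that Hibernate renders bw_and(a, b) as (a & b) in the generated SQL Server SQL, so the WHERE clause above becomes (a.setupFiltersIn & ?) <> 0:

import java.util.List;

import org.hibernate.QueryException;
import org.hibernate.dialect.function.SQLFunction;
import org.hibernate.engine.spi.Mapping;
import org.hibernate.engine.spi.SessionFactoryImplementor;
import org.hibernate.type.StandardBasicTypes;
import org.hibernate.type.Type;

public class BitwiseSQLFunction implements SQLFunction {

    private final String operator;   // "&" for AND, "|" for OR

    public BitwiseSQLFunction(String operator) {
        this.operator = operator;
    }

    @Override
    public boolean hasArguments() {
        return true;
    }

    @Override
    public boolean hasParenthesesIfNoArguments() {
        return true;
    }

    @Override
    public Type getReturnType(Type firstArgumentType, Mapping mapping) {
        return StandardBasicTypes.INTEGER;
    }

    @Override
    public String render(Type firstArgumentType, List arguments, SessionFactoryImplementor factory)
            throws QueryException {
        if (arguments.size() != 2) {
            throw new QueryException("bitwise function requires exactly two arguments");
        }
        // Emit the native SQL Server bitwise operator between the two rendered arguments.
        return "(" + arguments.get(0) + " " + operator + " " + arguments.get(1) + ")";
    }
}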