Data loss while inserting data into BigQuery using Apache Beam BigQueryIO

I am using the code below to insert data into BigQuery with Apache Beam's BigQueryIO. I read data from Kafka (Beam KafkaIO), process it, and create a PCollection of String, which I then stream to BigQuery. While writing to BigQuery, not all records are written to the table, and no exception is thrown either.
public class ConvertToTableRow extends DoFn<String, TableRow> {

    private static final long serialVersionUID = 1L;
    private StatsDClient statsdClient;
    private String statsDHost;
    private int statsDPort = 9125;

    public ConvertToTableRow(String statsDHost) {
        this.statsDHost = statsDHost;
    }

    @Setup
    public void startup() {
        this.statsdClient = new NonBlockingStatsDClient("Metric", statsDHost, statsDPort);
    }

    @ProcessElement
    public void processElement(@Element String record, ProcessContext context) {
        try {
            statsdClient.incrementCounter("bq.message");
            TableRow row = new TableRow();
            row.set("name", "Value");
            Long timestamp = System.currentTimeMillis();
            DateFormat dateFormater = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSS");
            Date date = new Date(timestamp);
            String insertDate = dateFormater.format(date);
            row.set("insert_date", insertDate);
            context.output(row);
        } catch (Exception e) {
            statsdClient.incrementCounter("exception.bq.message");
        }
    }

    @Teardown
    public void teardown() {
        this.statsdClient.close();
    }
}
private void streamWriteOutputToBQ(PCollection<TableRow> bqTableRows) {
    String tableSchema = //tableSchema;
    bqTableRows.apply(BigQueryIO.writeTableRows()
            .skipInvalidRows()
            .withMethod(Method.STREAMING_INSERTS)
            .to("myTable")
            .withJsonSchema(tableSchema)
            .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));
}
I am not sure if I am missing any configuration for BigQueryIO.
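One thing worth checking (a sketch, not a confirmed fix): skipInvalidRows() tells BigQuery to silently drop any row it rejects, and the catch block in the DoFn above swallows conversion errors, so records can disappear without any exception. With STREAMING_INSERTS you can instead capture the rejected rows from the WriteResult the sink returns. withFailedInsertRetryPolicy() and getFailedInserts() are standard BigQueryIO API; the "myTable" destination and the logging below are placeholders:
import com.google.api.services.bigquery.model.TableRow;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO.Write.Method;
import org.apache.beam.sdk.io.gcp.bigquery.InsertRetryPolicy;
import org.apache.beam.sdk.io.gcp.bigquery.WriteResult;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.TypeDescriptors;

private void streamWriteOutputToBQ(PCollection<TableRow> bqTableRows, String tableSchema) {
    WriteResult result = bqTableRows.apply(BigQueryIO.writeTableRows()
            .withMethod(Method.STREAMING_INSERTS)
            .to("myTable") // placeholder destination
            .withJsonSchema(tableSchema)
            // retry transient errors; permanently rejected rows go to getFailedInserts()
            .withFailedInsertRetryPolicy(InsertRetryPolicy.retryTransientErrors())
            .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));

    // Rows BigQuery rejected end up here instead of vanishing silently.
    result.getFailedInserts()
            .apply(MapElements.into(TypeDescriptors.strings())
                    .via(row -> {
                        System.err.println("Failed insert: " + row); // replace with real logging or a dead-letter sink
                        return row.toString();
                    }));
}
Counting the failed-insert PCollection against the "bq.message" StatsD counter should show where the missing records go.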

Related

Add weights to documents in Lucene 8 + Solr 8 while indexing

I am migrating Solr from 5.4.3 to 8.11 for one of my search apps and have successfully upgraded to 7.7.3. But in the further upgrades, the order of the response data changes from what it was earlier. I am trying to use FunctionScoreQuery along with DoubleValuesSource, since CustomScoreQuery is deprecated in 7.7.3 and removed in 8.
Below is my code snippet (I am now using Solr 8.5.2 and Lucene 8.5.2):
public class CustomQueryParser extends QParserPlugin {

    @Override
    public QParser createParser(final String qstr, final SolrParams localParams, final SolrParams params,
            final SolrQueryRequest req) {
        return new MyParser(qstr, localParams, params, req);
    }

    private static class MyParser extends QParser {
        private Query innerQuery;
        private String queryString;

        public MyParser(final String qstr, final SolrParams localParams, final SolrParams params,
                final SolrQueryRequest req) {
            super(qstr, localParams, params, req);
            if (qstr == null || qstr.trim().length() == 0) {
                this.queryString = DEFAULT_SEARCH_QUERY;
                setString(this.queryString);
            } else {
                this.queryString = qstr;
            }
            try {
                if (queryString.contains(":")) {
                    final QParser parser = getParser(queryString, "edismax", getReq());
                    this.innerQuery = parser.parse();
                } else {
                    final QParser parser = getParser(queryString, "dismax", getReq());
                    this.innerQuery = parser.parse();
                }
            } catch (final SyntaxError ex) {
                throw new RuntimeException("Error parsing query", ex);
            }
        }

        @Override
        public Query parse() throws SyntaxError {
            final Query query = new MyCustomQuery(innerQuery);
            final CustomValuesSource customValuesSource = new CustomValuesSource(queryString, innerQuery);
            final FunctionScoreQuery fsq = FunctionScoreQuery.boostByValue(query, customValuesSource.fromFloatField("score"));
            return fsq;
        }
    }
}
public class MyCustomQuery extends Query {

    @Override
    public Weight createWeight(final IndexSearcher searcher, final ScoreMode scoreMode, final float boost) throws IOException {
        Weight weight;
        if (query == null) {
            weight = new ConstantScoreWeight(this, boost) {
                @Override
                public Scorer scorer(final LeafReaderContext context) throws IOException {
                    return new ConstantScoreScorer(this, score(), scoreMode, DocIdSetIterator.all(context.reader().maxDoc()));
                }

                @Override
                public boolean isCacheable(final LeafReaderContext leafReaderContext) {
                    return false;
                }
            };
        } else {
            weight = searcher.createWeight(query, scoreMode, boost);
        }
        return weight;
    }
}
public class CustomValuesSource extends DoubleValuesSource {

    @Override
    public DoubleValues getValues(final LeafReaderContext leafReaderContext, final DoubleValues doubleValues) throws IOException {
        final DoubleValues dv = new CustomDoubleValues(leafReaderContext);
        return dv;
    }

    class CustomDoubleValues extends DoubleValues {
        @Override
        public boolean advanceExact(final int doc) throws IOException {
            final Document document = leafReaderContext.reader().document(doc);
            final List<IndexableField> fields = document.getFields();
            for (final IndexableField field : fields) {
                // total_score is being calculated with my own preferences
                document.add(new FloatDocValuesField("score", total_score));
                // can we include the **score** here?
            }
        }
    }
}
This custom logic, which includes the score, is not even being called.
I have been trying for a long time but have not found a single working example. Can anybody help me?
Thank you,
Syamala.
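Not a verified solution, but one observation may help: a Document cannot be modified inside advanceExact() at query time; doc values have to be written at index time and only read during scoring. Also, in Lucene 8 the second argument to DoubleValuesSource.getValues() carries the score of the wrapped query whenever needsScores() returns true, so the original score can be combined there. Below is a minimal sketch of that pattern, assuming "total_score" was indexed with FloatDocValuesField and that adding the two values is the desired combination:
import java.io.IOException;
import org.apache.lucene.index.LeafReaderContext;
import org.apache.lucene.index.NumericDocValues;
import org.apache.lucene.search.DoubleValues;
import org.apache.lucene.search.DoubleValuesSource;
import org.apache.lucene.search.IndexSearcher;

public class TotalScoreValuesSource extends DoubleValuesSource {

    @Override
    public DoubleValues getValues(LeafReaderContext ctx, DoubleValues scores) throws IOException {
        // must be written at index time, e.g. new FloatDocValuesField("total_score", ...)
        final NumericDocValues totalScore = ctx.reader().getNumericDocValues("total_score");
        return new DoubleValues() {
            @Override
            public double doubleValue() throws IOException {
                float stored = totalScore == null ? 0f : Float.intBitsToFloat((int) totalScore.longValue());
                // "scores" is the wrapped query's score because needsScores() returns true
                return scores.doubleValue() + stored;
            }

            @Override
            public boolean advanceExact(int doc) throws IOException {
                scores.advanceExact(doc);
                return totalScore != null && totalScore.advanceExact(doc);
            }
        };
    }

    @Override
    public boolean needsScores() {
        return true; // ask FunctionScoreQuery to pass the original scores into getValues()
    }

    @Override
    public DoubleValuesSource rewrite(IndexSearcher searcher) {
        return this;
    }

    @Override
    public boolean isCacheable(LeafReaderContext ctx) {
        return false;
    }

    @Override
    public boolean equals(Object other) {
        return other instanceof TotalScoreValuesSource;
    }

    @Override
    public int hashCode() {
        return getClass().hashCode();
    }

    @Override
    public String toString() {
        return "totalScore()";
    }
}
The query would then be built as new FunctionScoreQuery(innerQuery, new TotalScoreValuesSource()) rather than adding fields inside advanceExact().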

How to achieve fault tolerance when Flink sinks data to HDFS with gzip compression?

We want to write compressed data to HDFS using Flink's BucketingSink or StreamingFileSink. I have written my own Writer, which works fine if no failure occurs. However, when it encounters a failure and restarts from a checkpoint, it generates a valid-length file (Hadoop < 2.7) or truncates the file. Unfortunately, gzip files are binary files with a trailer at the end, so simple truncation does not work in my case. Any ideas for enabling exactly-once semantics for a compressed HDFS sink?
Here is my writer's code:
public class HdfsCompressStringWriter extends StreamWriterBaseV2<JSONObject> {

    private static final long serialVersionUID = 2L;

    /**
     * The {@code CompressFSDataOutputStream} for the current part file.
     */
    private transient GZIPOutputStream compressionOutputStream;

    public HdfsCompressStringWriter() {}

    @Override
    public void open(FileSystem fs, Path path) throws IOException {
        super.open(fs, path);
        this.setSyncOnFlush(true);
        compressionOutputStream = new GZIPOutputStream(this.getStream(), true);
    }

    public void close() throws IOException {
        if (compressionOutputStream != null) {
            compressionOutputStream.close();
            compressionOutputStream = null;
        }
        resetStream();
    }

    @Override
    public void write(JSONObject element) throws IOException {
        if (element == null || !element.containsKey("body")) {
            return;
        }
        String content = element.getString("body") + "\n";
        compressionOutputStream.write(content.getBytes());
        compressionOutputStream.flush();
    }

    @Override
    public Writer<JSONObject> duplicate() {
        return new HdfsCompressStringWriter();
    }
}
I would recommend implementing a BulkWriter for the StreamingFileSink which compresses the elements via a GZIPOutputStream. The code could look like the following:
public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.setParallelism(1);
    env.enableCheckpointing(1000);
    final DataStream<Integer> input = env.addSource(new InfinitySource());
    final StreamingFileSink<Integer> streamingFileSink = StreamingFileSink.<Integer>forBulkFormat(
            new Path("output"), new GzipBulkWriterFactory<>()).build();
    input.addSink(streamingFileSink);
    env.execute();
}

private static class GzipBulkWriterFactory<T> implements BulkWriter.Factory<T> {
    @Override
    public BulkWriter<T> create(FSDataOutputStream fsDataOutputStream) throws IOException {
        final GZIPOutputStream gzipOutputStream = new GZIPOutputStream(fsDataOutputStream, true);
        return new GzipBulkWriter<>(new ObjectOutputStream(gzipOutputStream), gzipOutputStream);
    }
}

private static class GzipBulkWriter<T> implements BulkWriter<T> {

    private final GZIPOutputStream gzipOutputStream;
    private final ObjectOutputStream objectOutputStream;

    public GzipBulkWriter(ObjectOutputStream objectOutputStream, GZIPOutputStream gzipOutputStream) {
        this.gzipOutputStream = gzipOutputStream;
        this.objectOutputStream = objectOutputStream;
    }

    @Override
    public void addElement(T t) throws IOException {
        objectOutputStream.writeObject(t);
    }

    @Override
    public void flush() throws IOException {
        objectOutputStream.flush();
    }

    @Override
    public void finish() throws IOException {
        objectOutputStream.flush();
        gzipOutputStream.finish();
    }
}
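Note that the ObjectOutputStream above writes Java-serialized objects into the gzip stream. If, as in the original writer, you want newline-delimited text instead, a string-oriented BulkWriter along these lines may be closer (a sketch; the UTF-8 encoding and trailing newline mirror the question's write() method):
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;
import org.apache.flink.api.common.serialization.BulkWriter;

// A text-oriented variant: gzip-compressed, newline-delimited strings.
public class GzipStringBulkWriter implements BulkWriter<String> {

    private final GZIPOutputStream out;

    public GzipStringBulkWriter(GZIPOutputStream out) {
        this.out = out;
    }

    @Override
    public void addElement(String element) throws IOException {
        out.write((element + "\n").getBytes(StandardCharsets.UTF_8));
    }

    @Override
    public void flush() throws IOException {
        out.flush();
    }

    @Override
    public void finish() throws IOException {
        // write the gzip trailer without closing the underlying part-file stream;
        // the StreamingFileSink manages the file lifecycle for exactly-once semantics
        out.finish();
    }
}
The factory would then return new GzipStringBulkWriter(new GZIPOutputStream(fsDataOutputStream, true)) instead of wrapping an ObjectOutputStream.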

DataStreamer does not work well

I'm using Ignite 2.1.0 and I created a simple program to try the DataStreamer, but I often get errors like this:
"[diagnostic]Failed to wait for partition map exchange" or "Attempted to release write lock while not holding it".
I started two local nodes: one in a Windows CMD window using the example XML configuration, and another in Eclipse. My code in Eclipse looks like this:
public class TestDataStreamer {

    public static void main(String[] args) {
        long bgn, end;
        IgniteConfiguration cfg = new IgniteConfiguration();
        cfg.setPeerClassLoadingEnabled(true);
        Ignite ignite = Ignition.start(cfg);
        CacheConfiguration<Long, Map> cacheConf = new CacheConfiguration();
        cacheConf.setName("TestDataStreamer").setCacheMode(CacheMode.REPLICATED);
        cacheConf.setBackups(0);
        IgniteCache cache = ignite.getOrCreateCache(cacheConf);
        cache.clear();
        File dataFile = new File("D:/data/1503307171374.data"); // 10,000,000 rows of text data
        bgn = System.currentTimeMillis();
        try {
            loadByStreamer(dataFile, ignite, "TestDataStreamer");
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            end = System.currentTimeMillis();
            System.out.println("---------------");
            System.out.println((end - bgn) / 1000.0 + " s");
        }
        cache.destroy();
        System.out.println("cache destroy...");
        ignite.close();
        System.out.println("finish");
    }

    private static void loadByStreamer(File dataFile, Ignite ignite, String cacheName) throws Exception {
        IgniteDataStreamer<Long, TestObj> ds = ignite.dataStreamer(cacheName);
        //ds.allowOverwrite(true);
        ds.autoFlushFrequency(10000);
        ds.perNodeBufferSize(4096);
        BufferedReader br = new BufferedReader(new InputStreamReader(
                new FileInputStream(dataFile), "UTF-8"));
        String line = null;
        long count = 0;
        while ((line = br.readLine()) != null) {
            ds.addData(System.currentTimeMillis(), parseData(line, Constants.DEFAULT_SEPARATOR,
                    "id,sn,type_code,trade_ts,bill_ts,company_code,company_name,biz_type,charge_amt,pay_mode".split(",")));
            if (++count % 10000 == 0) {
                System.out.println(count + " loaded...");
            }
            //System.out.println(count+":"+line);
        }
        System.out.println("flushing...");
        ds.flush();
        System.out.println("flushed");
        br.close();
        ds.close();
        System.out.println("file handled...");
    }

    private static TestObj parseData(String data, String saperator, String[] fields) {
        TestObj obj = new TestObj();
        if (data != null && saperator.trim().length() > 0) {
            String[] values = data.split(saperator);
            obj.setId(values[0]);
            obj.setSn(values[1]);
            obj.setType_code(values[2]);
            obj.setTrade_ts(values[3]);
            obj.setBill_ts(values[4]);
            obj.setCompany_code(values[5]);
            obj.setCompany_name(values[6]);
            obj.setBiz_type(values[7]);
            obj.setCharge_amt(values[8]);
            obj.setPay_mode(values[9]);
        }
        return obj;
    }
}
class TestObj {
    private String id;
    private String sn;
    private String type_code;
    private String trade_ts;
    private String bill_ts;
    private String company_code;
    private String company_name;
    private String biz_type;
    private String charge_amt;
    private String pay_mode;

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    public String getSn() {
        return sn;
    }

    public void setSn(String sn) {
        this.sn = sn;
    }

    public String getType_code() {
        return type_code;
    }

    public void setType_code(String type_code) {
        this.type_code = type_code;
    }

    public String getTrade_ts() {
        return trade_ts;
    }

    public void setTrade_ts(String trade_ts) {
        this.trade_ts = trade_ts;
    }

    public String getBill_ts() {
        return bill_ts;
    }

    public void setBill_ts(String bill_ts) {
        this.bill_ts = bill_ts;
    }

    public String getCompany_code() {
        return company_code;
    }

    public void setCompany_code(String company_code) {
        this.company_code = company_code;
    }

    public String getCompany_name() {
        return company_name;
    }

    public void setCompany_name(String company_name) {
        this.company_name = company_name;
    }

    public String getBiz_type() {
        return biz_type;
    }

    public void setBiz_type(String biz_type) {
        this.biz_type = biz_type;
    }

    public String getCharge_amt() {
        return charge_amt;
    }

    public void setCharge_amt(String charge_amt) {
        this.charge_amt = charge_amt;
    }

    public String getPay_mode() {
        return pay_mode;
    }

    public void setPay_mode(String pay_mode) {
        this.pay_mode = pay_mode;
    }
}
If I stop the node started in CMD and run the program on a single node, it works well.
Can anyone help me?
Update the JDK for both nodes to the same version, for example 1.8.0_144 (as you already have it installed), or at least try to update the JDK in Eclipse to the latest 1.7 release; that should help.
There is a thread on the Ignite user list where people hit pretty much the same exception, and updating their Java version fixed it.
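Independent of the JDK mismatch, two details of loadByStreamer may be worth tightening (a sketch, not a confirmed fix for the exchange errors): IgniteDataStreamer is AutoCloseable, so try-with-resources guarantees it is closed even if parsing throws mid-file, and System.currentTimeMillis() produces duplicate keys at this load rate, which the streamer silently skips under the default allowOverwrite(false); a unique key such as a running counter avoids that:
private static void loadByStreamer(File dataFile, Ignite ignite, String cacheName) throws Exception {
    // try-with-resources closes the streamer and reader even when an exception is thrown
    try (IgniteDataStreamer<Long, TestObj> ds = ignite.dataStreamer(cacheName);
         BufferedReader br = new BufferedReader(new InputStreamReader(
                 new FileInputStream(dataFile), "UTF-8"))) {
        ds.autoFlushFrequency(10000);
        ds.perNodeBufferSize(4096);
        String line;
        long count = 0;
        while ((line = br.readLine()) != null) {
            // unique key: currentTimeMillis() collides at this rate and duplicates are skipped
            ds.addData(++count, parseData(line, Constants.DEFAULT_SEPARATOR,
                    "id,sn,type_code,trade_ts,bill_ts,company_code,company_name,biz_type,charge_amt,pay_mode".split(",")));
        }
        ds.flush();
    }
}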

Wicket serialization issue with WebApplication

I'm continuing work on a logging behavior using WebSocketBehavior. It currently logs the correct data to a console, but it also throws a terrible serialization error, because I am providing the WicketApplication itself as a constructor argument for the behavior. I've tried passing it my session object and using that to get the WebApplication, but it consistently returns null. The broadcaster object requires the application in order to function properly. My question is: how can I provide the WebApplication to the behavior while avoiding the nasty serialization error? Here is my behavior class:
public class LogWebSocketBehavior extends WebSocketBehavior implements Serializable {

    private static final long serialVersionUID = 1L;
    private Console console;
    private Handler logHandler;
    private Model<LogRecord> model = new Model<>();
    private WebApplication application;

    public LogWebSocketBehavior(Console console, WebApplication application) {
        super();
        configureLogger();
        this.console = console;
        this.application = application;
    }

    private void configureLogger() {
        Enumeration<String> list = LogManager.getLogManager().getLoggerNames();
        list.hasMoreElements();
        Logger l = Logger.getLogger(AppUtils.loggerName);
        l.addHandler(getLoggerHandler());
    }

    @Override
    protected synchronized void onPush(WebSocketRequestHandler handler, IWebSocketPushMessage message) {
        LogRecord r = model.getObject();
        sendRecordToConsole(handler, r);
    }

    private Handler getLoggerHandler() {
        return new LogHandler() {
            private static final long serialVersionUID = 1L;

            @Override
            public void publish(LogRecord record) {
                model.setObject(record);
                sendToAllConnectedClients("data");
            }
        };
    }

    private synchronized void sendToAllConnectedClients(String message) {
        IWebSocketConnectionRegistry registry = new SimpleWebSocketConnectionRegistry();
        WebSocketPushBroadcaster b = new WebSocketPushBroadcaster(registry);
        b.broadcastAll(application, new Message());
    }

    private void sendRecordToConsole(WebSocketRequestHandler handler, LogRecord r) {
        Level level = r.getLevel();
        if (level.equals(Level.INFO)) {
            console.info(handler, new SimpleFormatter().formatMessage(r));
        } else {
            console.error(handler, new SimpleFormatter().formatMessage(r));
        }
    }

    class Message implements IWebSocketPushMessage {
        public Message() {
        }
    }
}
Here is the panel that is being used to display the messages:
public class FooterPanel extends Panel {

    private static final long serialVersionUID = 1L;
    private Form form;
    private Console console;

    public FooterPanel(String id) {
        super(id);
    }

    @Override
    public void onInitialize() {
        super.onInitialize();
        form = new Form("form");
        form.add(console = getConsole("feedback_console"));
        console.setOutputMarkupId(true);
        form.setOutputMarkupId(true);
        add(form);
        add(getLoggingBehavior());
    }

    private Console getConsole(String id) {
        return new Console(id) {
            private static final long serialVersionUID = 1L;
        };
    }

    private WebSocketBehavior getLoggingBehavior() {
        return new LogWebSocketBehavior(console, this.getWebApplication());
    }
}
I updated my behavior as follows:
public class LogWebSocketBehavior extends WebSocketBehavior implements Serializable {

    private static final long serialVersionUID = 1L;
    private Console console;
    private Handler logHandler;
    private Model<LogRecord> model = new Model<>();

    public LogWebSocketBehavior(Console console) {
        super();
        configureLogger();
        this.console = console;
    }

    private void configureLogger() {
        Enumeration<String> list = LogManager.getLogManager().getLoggerNames();
        list.hasMoreElements();
        Logger l = Logger.getLogger(AppUtils.loggerName);
        l.addHandler(getLoggerHandler());
    }

    @Override
    protected synchronized void onPush(WebSocketRequestHandler handler, IWebSocketPushMessage message) {
        LogRecord r = model.getObject();
        sendRecordToConsole(handler, r);
    }

    private Handler getLoggerHandler() {
        return new LogHandler() {
            private static final long serialVersionUID = 1L;

            @Override
            public void publish(LogRecord record) {
                model.setObject(record);
                sendToAllConnectedClients("data");
            }
        };
    }

    private synchronized void sendToAllConnectedClients(String message) {
        WebApplication application = WebApplication.get();
        IWebSocketConnectionRegistry registry = new SimpleWebSocketConnectionRegistry();
        WebSocketPushBroadcaster b = new WebSocketPushBroadcaster(registry);
        b.broadcastAll(application, new Message());
    }

    private void sendRecordToConsole(WebSocketRequestHandler handler, LogRecord r) {
        Level level = r.getLevel();
        String message = AppUtils.consoleDateTimeFormat.format(LocalDateTime.now()) + " - " + AppUtils.LogFormatter.formatMessage(r);
        if (level.equals(Level.INFO)) {
            console.info(handler, message);
        } else {
            console.error(handler, message);
        }
    }

    class Message implements IWebSocketPushMessage {
        public Message() {
        }
    }
}
And I'm back to the original issue I started with, which is the following error:
ERROR - ErrorLogger - Job (report.DB5E002E046235586592E7E984338DEE3 : 653 threw an exception.
org.quartz.SchedulerException:
Job threw an unhandled exception. [See nested exception: org.apache.wicket.WicketRuntimeException: There is no application attached to current thread DefaultQuartzScheduler_Worker-1]
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
Caused by: org.apache.wicket.WicketRuntimeException: There is no application attached to current thread DefaultQuartzScheduler_Worker-1
at org.apache.wicket.Application.get(Application.java:236)
at org.apache.wicket.protocol.http.WebApplication.get(WebApplication.java:160)
at eb.wicket.behaviors.LogWebSocketBehavior.sendToAllConnectedClients(LogWebSocketBehavior.java:77)
at eb.wicket.behaviors.LogWebSocketBehavior.access$100(LogWebSocketBehavior.java:29)
at eb.wicket.behaviors.LogWebSocketBehavior$1.publish(LogWebSocketBehavior.java:70)
Finally, it's working as desired. Here's the behavior class:
public class LogWebSocketBehavior extends WebSocketBehavior implements Serializable {

    private static final long serialVersionUID = 1L;
    private Console console;
    private Model<LogRecord> model = new Model<>();

    public LogWebSocketBehavior(Console console) {
        super();
        configureLogger();
        this.console = console;
    }

    private void configureLogger() {
        Enumeration<String> list = LogManager.getLogManager().getLoggerNames();
        list.hasMoreElements();
        Logger l = Logger.getLogger(AppUtils.loggerName);
        l.addHandler(getLoggerHandler());
    }

    @Override
    protected synchronized void onPush(WebSocketRequestHandler handler, IWebSocketPushMessage message) {
        LogRecord r = model.getObject();
        sendRecordToConsole(handler, r);
    }

    private Handler getLoggerHandler() {
        return new LogHandler() {
            private static final long serialVersionUID = 1L;

            @Override
            public void publish(LogRecord record) {
                model.setObject(record);
                sendToAllConnectedClients("data");
            }
        };
    }

    private synchronized void sendToAllConnectedClients(String message) {
        IWebSocketConnectionRegistry registry = new SimpleWebSocketConnectionRegistry();
        WebSocketPushBroadcaster b = new WebSocketPushBroadcaster(registry);
        b.broadcastAll(Application.get("eb.wicket.MyWicketFilter"), new Message());
    }

    private void sendRecordToConsole(WebSocketRequestHandler handler, LogRecord r) {
        Level level = r.getLevel();
        String message = AppUtils.consoleDateTimeFormat.format(LocalDateTime.now()) + " - " + AppUtils.LogFormatter.formatMessage(r);
        if (level.equals(Level.INFO)) {
            console.info(handler, message);
        } else {
            console.error(handler, message);
        }
    }

    class Message implements IWebSocketPushMessage {
        public Message() {
        }
    }
}
Instead of keeping a reference to the Application, just look it up when needed: Application.get().
After updating your question we can see:
Caused by: org.apache.wicket.WicketRuntimeException:
There is no application attached to current thread DefaultQuartzScheduler_Worker-1
This explains it: this is a thread started by Quartz, not an HTTP thread.
The only way to overcome this is to use Application.get(String). The value should be the application name (Application#getName()) that is specified as a value for <filter-name> in your web.xml.
This way you can get the Application instance, but there is no way to do the same for Session and/or RequestCycle in case you need them too.
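For illustration, the lookup inside the Quartz-triggered handler could look like this; "my-wicket-app" is a placeholder and must match the <filter-name> in your web.xml (the final code above uses "eb.wicket.MyWicketFilter" for the same purpose):
import org.apache.wicket.Application;
import org.apache.wicket.protocol.ws.api.WebSocketPushBroadcaster;
import org.apache.wicket.protocol.ws.api.registry.IWebSocketConnectionRegistry;
import org.apache.wicket.protocol.ws.api.registry.SimpleWebSocketConnectionRegistry;

private synchronized void sendToAllConnectedClients(String message) {
    // works on non-HTTP threads too: looks the application up by its registered name
    Application application = Application.get("my-wicket-app"); // placeholder: your <filter-name>
    IWebSocketConnectionRegistry registry = new SimpleWebSocketConnectionRegistry();
    WebSocketPushBroadcaster broadcaster = new WebSocketPushBroadcaster(registry);
    broadcaster.broadcastAll(application, new Message());
}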

How to retrieve the Date part out of a Datetime result column in SQLite?

I have a column of datetime type from which I would like to retrieve only the date. Is there any way to do this?
Previously it was an epoch value, which I converted to datetime.
Here is a sample result :
smbd|ip address|1082|ip address|"2011-04-26 18:40:34"
I have tried the following commands, but they yield negative or zero results:
SELECT DATE(datetime) from attacked_total;
SELECT STRFTIME('%Y-%m-%d', datetime) FROM attacked_total;
SELECT DATETIME('%Y-%m-%d', datetime) FROM attacked_total;
SELECT DATE('%Y-%m-%d', datetime) FROM attacked_total;
You can use the DATE function.
Example
> select date('2011-04-26 18:40:34')
> 2011-04-26
You can get only the day with strftime,
> select strftime('%d', '2011-04-26 18:40:34')
> 26
This works.
SELECT strftime('%d', '2011-04-26')
You can get the year, month, day, and everything else that is part of a date.
But you must be careful how you enter those strings.
The date format is YYYY-MM-DD, so you must enter the string in that format.
It's also case-sensitive: if you want the month, use %m, because %M will give you minutes instead, and if your date has no time part it will throw an error.
For additional information, check the official SQLite site.
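If you are querying SQLite from Java (as in the Hibernate setup below), a minimal JDBC sketch of the same date() call could look like this; the database file name is a placeholder, and the attacked_total table and its datetime column are taken from the question. Note that date() returns NULL unless the stored text is in a format SQLite recognizes, such as YYYY-MM-DD HH:MM:SS; if the column still holds epoch seconds, date(datetime, 'unixepoch') performs the conversion:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DateOnlyExample {
    public static void main(String[] args) throws Exception {
        // assumes the sqlite-jdbc driver is on the classpath; attacks.db is a placeholder
        try (Connection conn = DriverManager.getConnection("jdbc:sqlite:attacks.db");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT date(datetime) FROM attacked_total")) {
            while (rs.next()) {
                System.out.println(rs.getString(1)); // e.g. 2011-04-26
            }
        }
    }
}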
Try these functions if you are using SQLite:
strftime('%Y', current_date)
strftime('%m', current_date)
strftime('%d', current_date)
strftime('%H', current_date)
strftime('%M', current_date)
strftime('%S', current_date)
If you are using Hibernate, try registering these functions in the SQLiteDialect class:
registerFunction("year", new SQLFunctionTemplate(StandardBasicTypes.INTEGER, "abs(strftime('%Y', ?1))"));
registerFunction("month", new SQLFunctionTemplate(StandardBasicTypes.INTEGER, "abs(strftime('%m', ?1))"));
registerFunction("day", new SQLFunctionTemplate(StandardBasicTypes.INTEGER, "abs(strftime('%d', ?1))"));
registerFunction("hour", new SQLFunctionTemplate(StandardBasicTypes.INTEGER, "abs(strftime('%H', ?1))"));
registerFunction("minute", new SQLFunctionTemplate(StandardBasicTypes.INTEGER, "abs(strftime('%M', ?1))"));
registerFunction("second", new SQLFunctionTemplate(StandardBasicTypes.INTEGER, "abs(strftime('%S', ?1))"));
Here is my SQLiteDialect class, in case you want to make any suggestions:
import java.sql.SQLException;
import java.sql.Types;
import org.hibernate.JDBCException;
import org.hibernate.ScrollMode;
import org.hibernate.dialect.Dialect;
import org.hibernate.dialect.function.AbstractAnsiTrimEmulationFunction;
import org.hibernate.dialect.function.NoArgSQLFunction;
import org.hibernate.dialect.function.SQLFunction;
import org.hibernate.dialect.function.SQLFunctionTemplate;
import org.hibernate.dialect.function.StandardSQLFunction;
import org.hibernate.dialect.function.VarArgsSQLFunction;
import org.hibernate.dialect.identity.IdentityColumnSupport;
import utiles.SQLiteDialectIdentityColumnSupport;
import org.hibernate.dialect.pagination.AbstractLimitHandler;
import org.hibernate.dialect.pagination.LimitHandler;
import org.hibernate.dialect.pagination.LimitHelper;
import org.hibernate.dialect.unique.DefaultUniqueDelegate;
import org.hibernate.dialect.unique.UniqueDelegate;
import org.hibernate.engine.spi.RowSelection;
import org.hibernate.exception.DataException;
import org.hibernate.exception.JDBCConnectionException;
import org.hibernate.exception.LockAcquisitionException;
import org.hibernate.exception.spi.SQLExceptionConversionDelegate;
import org.hibernate.exception.spi.TemplatedViolatedConstraintNameExtracter;
import org.hibernate.exception.spi.ViolatedConstraintNameExtracter;
import org.hibernate.internal.util.JdbcExceptionHelper;
import org.hibernate.mapping.Column;
import org.hibernate.type.StandardBasicTypes;
/**
* An SQL dialect for SQLite 3.
*/
public class SQLiteDialect5 extends Dialect {

    private final UniqueDelegate uniqueDelegate;

    public SQLiteDialect5() {
        registerColumnType(Types.BIT, "boolean");
        //registerColumnType(Types.FLOAT, "float");
        //registerColumnType(Types.DOUBLE, "double");
        registerColumnType(Types.DECIMAL, "decimal");
        registerColumnType(Types.CHAR, "char");
        registerColumnType(Types.LONGVARCHAR, "longvarchar");
        registerColumnType(Types.TIMESTAMP, "datetime");
        registerColumnType(Types.BINARY, "blob");
        registerColumnType(Types.VARBINARY, "blob");
        registerColumnType(Types.LONGVARBINARY, "blob");
        registerFunction("concat", new VarArgsSQLFunction(StandardBasicTypes.STRING, "", "||", ""));
        registerFunction("mod", new SQLFunctionTemplate(StandardBasicTypes.INTEGER, "?1 % ?2"));
        registerFunction("quote", new StandardSQLFunction("quote", StandardBasicTypes.STRING));
        registerFunction("random", new NoArgSQLFunction("random", StandardBasicTypes.INTEGER));
        registerFunction("round", new StandardSQLFunction("round"));
        registerFunction("substr", new StandardSQLFunction("substr", StandardBasicTypes.STRING));
        registerFunction("year", new SQLFunctionTemplate(StandardBasicTypes.INTEGER, "abs(strftime('%Y', ?1))"));
        registerFunction("month", new SQLFunctionTemplate(StandardBasicTypes.INTEGER, "abs(strftime('%m', ?1))"));
        registerFunction("day", new SQLFunctionTemplate(StandardBasicTypes.INTEGER, "abs(strftime('%d', ?1))"));
        registerFunction("hour", new SQLFunctionTemplate(StandardBasicTypes.INTEGER, "abs(strftime('%H', ?1))"));
        registerFunction("minute", new SQLFunctionTemplate(StandardBasicTypes.INTEGER, "abs(strftime('%M', ?1))"));
        registerFunction("second", new SQLFunctionTemplate(StandardBasicTypes.INTEGER, "abs(strftime('%S', ?1))"));
        registerFunction("trim", new AbstractAnsiTrimEmulationFunction() {
            @Override
            protected SQLFunction resolveBothSpaceTrimFunction() {
                return new SQLFunctionTemplate(StandardBasicTypes.STRING, "trim(?1)");
            }

            @Override
            protected SQLFunction resolveBothSpaceTrimFromFunction() {
                return new SQLFunctionTemplate(StandardBasicTypes.STRING, "trim(?2)");
            }

            @Override
            protected SQLFunction resolveLeadingSpaceTrimFunction() {
                return new SQLFunctionTemplate(StandardBasicTypes.STRING, "ltrim(?1)");
            }

            @Override
            protected SQLFunction resolveTrailingSpaceTrimFunction() {
                return new SQLFunctionTemplate(StandardBasicTypes.STRING, "rtrim(?1)");
            }

            @Override
            protected SQLFunction resolveBothTrimFunction() {
                return new SQLFunctionTemplate(StandardBasicTypes.STRING, "trim(?1, ?2)");
            }

            @Override
            protected SQLFunction resolveLeadingTrimFunction() {
                return new SQLFunctionTemplate(StandardBasicTypes.STRING, "ltrim(?1, ?2)");
            }

            @Override
            protected SQLFunction resolveTrailingTrimFunction() {
                return new SQLFunctionTemplate(StandardBasicTypes.STRING, "rtrim(?1, ?2)");
            }
        });
        uniqueDelegate = new SQLiteUniqueDelegate(this);
    }
    // database type mapping support ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    @Override
    public String getCastTypeName(int code) {
        // FIXME
        return super.getCastTypeName(code);
    }

    // IDENTITY support ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    private static final SQLiteDialectIdentityColumnSupport IDENTITY_COLUMN_SUPPORT = new SQLiteDialectIdentityColumnSupport();

    @Override
    public IdentityColumnSupport getIdentityColumnSupport() {
        return IDENTITY_COLUMN_SUPPORT;
    }

    // limit/offset support ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    private static final AbstractLimitHandler LIMIT_HANDLER = new AbstractLimitHandler() {
        @Override
        public String processSql(String sql, RowSelection selection) {
            final boolean hasOffset = LimitHelper.hasFirstRow(selection);
            return sql + (hasOffset ? " limit ? offset ?" : " limit ?");
        }

        @Override
        public boolean supportsLimit() {
            return true;
        }

        @Override
        public boolean bindLimitParametersInReverseOrder() {
            return true;
        }
    };

    @Override
    public LimitHandler getLimitHandler() {
        return LIMIT_HANDLER;
    }

    // lock acquisition support ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    @Override
    public boolean supportsLockTimeouts() {
        // may be http://sqlite.org/c3ref/db_mutex.html ?
        return false;
    }

    @Override
    public String getForUpdateString() {
        return "";
    }

    @Override
    public boolean supportsOuterJoinForUpdate() {
        return false;
    }

    /*
    @Override
    public boolean dropTemporaryTableAfterUse() {
        return true; // temporary tables are only dropped when the connection is closed. If the connection is pooled...
    }
    */

    // current timestamp support ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    @Override
    public boolean supportsCurrentTimestampSelection() {
        return true;
    }

    @Override
    public boolean isCurrentTimestampSelectStringCallable() {
        return false;
    }

    @Override
    public String getCurrentTimestampSelectString() {
        return "select current_timestamp";
    }

    // SQLException support ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    private static final int SQLITE_BUSY = 5;
    private static final int SQLITE_LOCKED = 6;
    private static final int SQLITE_IOERR = 10;
    private static final int SQLITE_CORRUPT = 11;
    private static final int SQLITE_NOTFOUND = 12;
    private static final int SQLITE_FULL = 13;
    private static final int SQLITE_CANTOPEN = 14;
    private static final int SQLITE_PROTOCOL = 15;
    private static final int SQLITE_TOOBIG = 18;
    private static final int SQLITE_CONSTRAINT = 19;
    private static final int SQLITE_MISMATCH = 20;
    private static final int SQLITE_NOTADB = 26;

    @Override
    public SQLExceptionConversionDelegate buildSQLExceptionConversionDelegate() {
        return new SQLExceptionConversionDelegate() {
            @Override
            public JDBCException convert(SQLException sqlException, String message, String sql) {
                final int errorCode = JdbcExceptionHelper.extractErrorCode(sqlException);
                if (errorCode == SQLITE_TOOBIG || errorCode == SQLITE_MISMATCH) {
                    return new DataException(message, sqlException, sql);
                } else if (errorCode == SQLITE_BUSY || errorCode == SQLITE_LOCKED) {
                    return new LockAcquisitionException(message, sqlException, sql);
                } else if ((errorCode >= SQLITE_IOERR && errorCode <= SQLITE_PROTOCOL) || errorCode == SQLITE_NOTADB) {
                    return new JDBCConnectionException(message, sqlException, sql);
                }
                // returning null allows other delegates to operate
                return null;
            }
        };
    }

    @Override
    public ViolatedConstraintNameExtracter getViolatedConstraintNameExtracter() {
        return EXTRACTER;
    }

    private static final ViolatedConstraintNameExtracter EXTRACTER = new TemplatedViolatedConstraintNameExtracter() {
        @Override
        protected String doExtractConstraintName(SQLException sqle) throws NumberFormatException {
            final int errorCode = JdbcExceptionHelper.extractErrorCode(sqle);
            if (errorCode == SQLITE_CONSTRAINT) {
                return extractUsingTemplate("constraint ", " failed", sqle.getMessage());
            }
            return null;
        }
    };

    // union subclass support ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    @Override
    public boolean supportsUnionAll() {
        return true;
    }

    // DDL support ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    @Override
    public boolean canCreateSchema() {
        return false;
    }

    @Override
    public boolean hasAlterTable() {
        // As specified in NHibernate dialect
        return false;
    }

    @Override
    public boolean dropConstraints() {
        return false;
    }

    @Override
    public boolean qualifyIndexName() {
        return false;
    }

    @Override
    public String getAddColumnString() {
        return "add column";
    }

    @Override
    public String getDropForeignKeyString() {
        throw new UnsupportedOperationException("No drop foreign key syntax supported by SQLiteDialect");
    }

    @Override
    public String getAddForeignKeyConstraintString(String constraintName,
            String[] foreignKey, String referencedTable, String[] primaryKey,
            boolean referencesPrimaryKey) {
        throw new UnsupportedOperationException("No add foreign key syntax supported by SQLiteDialect");
    }

    @Override
    public String getAddPrimaryKeyConstraintString(String constraintName) {
        throw new UnsupportedOperationException("No add primary key syntax supported by SQLiteDialect");
    }

    @Override
    public boolean supportsCommentOn() {
        return true;
    }

    @Override
    public boolean supportsIfExistsBeforeTableName() {
        return true;
    }

    /* not case insensitive for unicode characters by default (ICU extension needed)
    public boolean supportsCaseInsensitiveLike() {
        return true;
    }
    */

    @Override
    public boolean doesReadCommittedCauseWritersToBlockReaders() {
        // TODO Validate (WAL mode...)
        return true;
    }

    @Override
    public boolean doesRepeatableReadCauseReadersToBlockWriters() {
        return true;
    }

    @Override
    public boolean supportsTupleDistinctCounts() {
        return false;
    }

    @Override
    public int getInExpressionCountLimit() {
        // Compile/runtime time option: http://sqlite.org/limits.html#max_variable_number
        return 1000;
    }

    @Override
    public UniqueDelegate getUniqueDelegate() {
        return uniqueDelegate;
    }

    private static class SQLiteUniqueDelegate extends DefaultUniqueDelegate {
        public SQLiteUniqueDelegate(Dialect dialect) {
            super(dialect);
        }

        @Override
        public String getColumnDefinitionUniquenessFragment(Column column) {
            return " unique";
        }
    }

    @Override
    public String getSelectGUIDString() {
        return "select hex(randomblob(16))";
    }

    @Override
    public ScrollMode defaultScrollMode() {
        return ScrollMode.FORWARD_ONLY;
    }
}
This is for Hibernate 5.1.
Try looking up usage of the "datepart" function in SQL (note that datepart is SQL Server syntax; SQLite itself uses strftime, as shown above).
Something like this should work:
SELECT datepart(dd, datefield) + '/' + datepart(mm, datefield) + '/' + datepart(yyyy, datefield) AS datewithouttime FROM tableName