My use cases:
We have an installation schedule entity (see the code below), and it has an installation date.
Once an installation has completed, we verify the installation status with the customer after 4 weekdays.
Note: this count of 4 weekdays is configurable, so 'X' weekdays.
Weekdays means Monday to Friday; we don't care about other holidays.
I have a scheduler that will retrieve these orders after 'X' weekdays. This is where I'm stuck:
I don't know how to write a query for "after 'X' weekdays".
My code:
@Entity
@Table(schema = "myschema", name = "installation_dates")
@Getter
@Setter
@NoArgsConstructor
public class InstallDates extends TransEntity implements Serializable {
    // other columns
    @Column(name = "installation_schedule_datetime")
    private LocalDateTime installationScheduleDatetime; // I use this column for the calculation

    @Formula("getWeekDaysCount(installationScheduleDatetime)")
    private int weekDaysCount;

    public int getWeekDaysCount(LocalDateTime installationScheduleDatetime) {
        int totalWeekDays = 0;
        LocalDateTime todayDate = LocalDateTime.now();
        while (!installationScheduleDatetime.isAfter(todayDate)) {
            switch (installationScheduleDatetime.getDayOfWeek()) {
                case SATURDAY:
                case SUNDAY:
                    break;
                default:
                    totalWeekDays++;
                    break;
            }
            installationScheduleDatetime = installationScheduleDatetime.plusDays(1);
        }
        return totalWeekDays;
    }
}
Question:
How do I write an SQL, JPQL, or JPA query for weekdays?
I know it's a very basic question. I'm a mobile app developer; I recently joined the Springboard team, and it's really hard for me :(
Feel free to give your valuable feedback!
I have the following suggestion, if I understood the problem correctly.
Java:
Take the current date.
Find the date of interest: subtract 'X' workdays (so if it is Friday today, subtract 4 days; if it is Monday, subtract 2 days for the weekend plus the 4 weekdays).
Then write a query that selects all installations that were done on the date of interest.
In pseudo code:
select * from installations where installation_date = <date of interest>;
Date of interest Java code:
public LocalDateTime getDateOfInterest(int workdays) {
    LocalDateTime currentDate = LocalDateTime.now();
    if (workdays < 1) {
        return currentDate;
    }
    // subtract 'X' working days from the current date
    LocalDateTime result = currentDate;
    int subtractedDays = 0;
    while (subtractedDays < workdays) {
        result = result.minusDays(1);
        if (!(result.getDayOfWeek() == DayOfWeek.SATURDAY ||
              result.getDayOfWeek() == DayOfWeek.SUNDAY)) {
            ++subtractedDays;
        }
    }
    return result;
}
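The subtraction logic above is easier to unit-test if "today" is passed in as a parameter instead of being read from LocalDateTime.now() inside the method. A minimal standalone sketch (class and method names here are mine, for illustration only):

```java
import java.time.DayOfWeek;
import java.time.LocalDate;

public class DateOfInterest {
    // Subtracts the given number of working days (Mon-Fri) from 'today'.
    public static LocalDate minusWorkdays(LocalDate today, int workdays) {
        LocalDate result = today;
        int subtracted = 0;
        while (subtracted < workdays) {
            result = result.minusDays(1);
            DayOfWeek dow = result.getDayOfWeek();
            if (dow != DayOfWeek.SATURDAY && dow != DayOfWeek.SUNDAY) {
                subtracted++;
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // Friday 2021-01-08 minus 4 workdays is Monday 2021-01-04
        System.out.println(minusWorkdays(LocalDate.of(2021, 1, 8), 4));
    }
}
```

With the date injected, the scheduler can pass LocalDate.now() in production and a fixed date in tests.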
First, let's answer your questions:
No, you cannot call a method from @Formula.
You probably could (see here), but that might depend on your database.
The fact that you use an entity and JPA doesn't mean everything has to be a JPA property.
You could:
Write a get method that calculates it on the fly.
Write a getter which sets it lazily.
Use @PostLoad to always set it.
@Entity
@Table(schema = "myschema", name = "installation_dates")
@Getter
@Setter
@NoArgsConstructor
public class InstallDates extends TransEntity implements Serializable {
    // other columns
    @Column(name = "installation_schedule_datetime")
    private LocalDateTime installationScheduleDatetime; // I use this column for the calculation

    public int getWeekDaysCount() {
        int totalWeekDays = 0;
        LocalDateTime isdt = this.installationScheduleDatetime;
        LocalDateTime todayDate = LocalDateTime.now();
        while (!isdt.isAfter(todayDate)) {
            switch (isdt.getDayOfWeek()) {
                case SATURDAY:
                case SUNDAY:
                    break;
                default:
                    totalWeekDays++;
                    break;
            }
            isdt = isdt.plusDays(1);
        }
        return totalWeekDays;
    }
}
Or if you really want it to be a property, you could use the getter to set it lazily.
@Entity
@Table(schema = "myschema", name = "installation_dates")
@Getter
@Setter
@NoArgsConstructor
public class InstallDates extends TransEntity implements Serializable {
    // other columns
    @Column(name = "installation_schedule_datetime")
    private LocalDateTime installationScheduleDatetime; // I use this column for the calculation

    private int weekDaysCount = -1;

    public int getWeekDaysCount() {
        if (weekDaysCount == -1) {
            int totalWeekDays = 0;
            LocalDateTime isdt = this.installationScheduleDatetime;
            LocalDateTime todayDate = LocalDateTime.now();
            while (!isdt.isAfter(todayDate)) {
                switch (isdt.getDayOfWeek()) {
                    case SATURDAY:
                    case SUNDAY:
                        break;
                    default:
                        totalWeekDays++;
                        break;
                }
                isdt = isdt.plusDays(1);
            }
            weekDaysCount = totalWeekDays;
        }
        return weekDaysCount;
    }
}
Or, if you always want that value to be calculated, you could place the initialization in a method annotated with @PostLoad (you could even reuse the lazy getter above for it), or move the init code into the @PostLoad annotated method.
@PostLoad
private void initValues() {
    getWeekDaysCount();
}
@Formula specifies an expression written in native SQL that is used to read the value of an attribute instead of storing the value in a Column. (https://docs.jboss.org/hibernate/orm/current/javadocs/org/hibernate/annotations/Formula.html)
As for your case, it doesn't look like there's much use in storing weekDaysCount in the DB if it's derived from installationScheduleDatetime. I'd just mark the weekDaysCount as @Transient and be done with it (@Formula should be removed).
Another solution would be to leave weekDaysCount non-transient and put your calculations in a @PreUpdate/@PrePersist method. See https://www.baeldung.com/jpa-entity-lifecycle-events for more info on that.
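Whichever of these variants you pick, the counting itself can live in a plain helper that works on LocalDate and is trivial to test in isolation. A sketch (the class name is mine; Monday to Friday are counted, per the question's definition of weekdays):

```java
import java.time.DayOfWeek;
import java.time.LocalDate;

public class WeekdayCount {
    // Counts Monday-Friday days from start to end, inclusive on both ends.
    public static int weekdaysBetween(LocalDate start, LocalDate end) {
        int total = 0;
        for (LocalDate d = start; !d.isAfter(end); d = d.plusDays(1)) {
            DayOfWeek dow = d.getDayOfWeek();
            if (dow != DayOfWeek.SATURDAY && dow != DayOfWeek.SUNDAY) {
                total++;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        // Monday 2021-01-04 through Sunday 2021-01-10 contains 5 weekdays
        System.out.println(weekdaysBetween(LocalDate.of(2021, 1, 4), LocalDate.of(2021, 1, 10)));
    }
}
```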
Related
My LocalDateTime variable changes by 5 hours every time I get it by querying my SQL table. If I have a time that starts at 9:30 AM, it'll return 4:30 AM.
Whenever I try to get a datetime field from my SQL table by querying it in my repo:
@Query(value = "SELECT start FROM course_section WHERE crn = ?1", nativeQuery = true)
LocalDateTime getStartTimeByCrn(int term);

@Query(value = "SELECT end FROM course_section WHERE crn = ?1", nativeQuery = true)
LocalDateTime getEndTimeByCrn(int term);
I send it to my service
// Method to get the start time for the Section class
public LocalDateTime getStartTimeByCrn(int term) {
    return repo.getStartTimeByCrn(term);
}

// Method to get the end time for the Section class
public LocalDateTime getEndTimeByCrn(int term) {
    return repo.getEndTimeByCrn(term);
}
and my controller uses the methods
@RequestMapping(value = "/accepted", method = RequestMethod.POST)
public String addSchedule(@ModelAttribute("schedule") Schedule schedule) throws JsonProcessingException {
    int crn = schedule.getCrn();
    Section sec = new Section();
    // set all the attributes of the Section object
    sec.setStart(service.getStartTimeByCrn(crn));
    sec.setEnd(service.getEndTimeByCrn(crn));
    // String stringToJson = new JsonMapper().writeValueAsString(sec);
    service.save(sec);
    System.out.print(sec.getStart() + " is the subject!!!");
    return "calendar";
}
The datetime value in a row changes by five hours when I fetch it by CRN from my SQL table. Nothing else changes: not the year, month, day, minute, or second.
My @Entity LocalDateTime variables look like:
@JsonFormat(shape = JsonFormat.Shape.STRING, pattern = "yyyy-MM-dd'T'HH:mm:ss")
private LocalDateTime start;

@JsonFormat(shape = JsonFormat.Shape.STRING, pattern = "yyyy-MM-dd'T'HH:mm:ss")
private LocalDateTime end;
It seems like there is no logical reason for this bug to happen...
I got a crash when the number of ids is > 999:
android.database.sqlite.SQLiteException: too many SQL variables (code 1): ,
while compiling: delete from data where ids in (?,?,...)
It seems there is a maximum limit of 999 bound variables.
How can I delete more than 1000 rows with Room?
You probably have a list of ids to delete. Open a transaction, split the list into sublists, and execute the SQL delete operation once per sublist.
For more information, see the official Room documentation about transactions.
I didn't test the following code, but I think it accomplishes what you need.
@Dao
public interface DataDao {
    @Query("DELETE FROM data WHERE ids IN (:filterValues)")
    int delete(List<String> filterValues);

    @Transaction
    default void deleteData(List<Data> dataToDelete) {
        // split the list into chunks of 100 elements (change as you prefer, but keep it < 999)
        List<List<Data>> subLists = DeleteHelper.chopped(dataToDelete, 100);
        for (List<Data> list : subLists) {
            List<String> ids = new ArrayList<>();
            for (Data item : list) {
                ids.add(item.getId());
            }
            delete(ids);
        }
    }
}
public abstract class DeleteHelper {
    // chops a list into non-view sublists of length L
    public static <T> List<List<T>> chopped(List<T> list, final int L) {
        List<List<T>> parts = new ArrayList<>();
        final int N = list.size();
        for (int i = 0; i < N; i += L) {
            parts.add(new ArrayList<>(list.subList(i, Math.min(N, i + L))));
        }
        return parts;
    }
}
I hope this helps.
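A quick standalone check of the chopping idea (same algorithm as DeleteHelper.chopped above, no Room required), showing that every sublist stays under the chosen limit:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ChoppedDemo {
    // chops a list into non-view sublists of at most 'size' elements
    public static <T> List<List<T>> chopped(List<T> list, final int size) {
        List<List<T>> parts = new ArrayList<>();
        for (int i = 0; i < list.size(); i += size) {
            parts.add(new ArrayList<>(list.subList(i, Math.min(list.size(), i + size))));
        }
        return parts;
    }

    public static void main(String[] args) {
        List<Integer> ids = Arrays.asList(1, 2, 3, 4, 5);
        // chunk size 2: [[1, 2], [3, 4], [5]]
        System.out.println(chopped(ids, 2));
    }
}
```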
I think there are two ways to solve it.
First, chop your list into pieces and run the delete method multiple times (just like @xcesco answered).
Second, you can build one very long query and run it with @RawQuery:
@RawQuery
abstract int simpleRawQuery(SupportSQLiteQuery sqliteQuery);

@Transaction
public int deleteData(List<Long> pkList) {
    SimpleSQLiteQuery query = new SimpleSQLiteQuery(
            "DELETE FROM tb WHERE _id IN (" + StringUtils.join(pkList, ",") + ")");
    return simpleRawQuery(query);
}
To get familiar with OptaPlanner I created a simple test project. I only have one solution and one entity class. The entity has a single value between 0 and 9. There should only be odd numbers, and the sum of all values should be less than 10 (these are just some random constraints I came up with).
As the score I use a simple HardSoftScore. Here is the code:
public class TestScoreCalculator implements EasyScoreCalculator<TestSolution> {
    @Override
    public HardSoftScore calculateScore(TestSolution sol) {
        int hardScore = 0;
        int softScore = 0;

        int valueSum = 0;
        for (TestEntity entity : sol.getTestEntities()) {
            valueSum += entity.getValue() == null ? 0 : entity.getValue();
        }

        // hard score
        for (TestEntity entity : sol.getTestEntities()) {
            if (entity.getValue() == null || entity.getValue() % 2 == 0)
                hardScore -= 1; // constraint: only odd numbers
        }
        if (valueSum > 10)
            hardScore -= 2; // constraint: sum should be less than 11

        // soft score
        softScore = valueSum; // maximize
        return HardSoftScore.valueOf(hardScore, softScore);
    }
}
and this is my config file:
<?xml version="1.0" encoding="UTF-8"?>
<solver>
<!-- Domain model configuration -->
<scanAnnotatedClasses/>
<!-- Score configuration -->
<scoreDirectorFactory>
<easyScoreCalculatorClass>score.TestScoreCalculator</easyScoreCalculatorClass>
</scoreDirectorFactory>
<!-- Optimization algorithms configuration -->
<termination>
<secondsSpentLimit>30</secondsSpentLimit>
</termination>
</solver>
For some reason OptaPlanner can't find a feasible solution. It terminates with LS step (161217), time spent (29910), score (-2hard/10soft), best score (-2hard/10soft)... and the solution 9 1 0 0.
So the hard score is -2 because the two 0s are not odd. A possible solution would be 7 1 1 1, for example. Why is this? It should be a really easy example...
(When I set the start values to 7 1 1 1, it terminates with that solution and a score of (0hard/10soft), as it should.)
Edit:
The Entity class
@PlanningEntity
public class TestEntity {
    private Integer value;

    @PlanningVariable(valueRangeProviderRefs = {"TestEntityValueRange"})
    public Integer getValue() {
        return value;
    }

    public void setValue(Integer value) {
        this.value = value;
    }

    @ValueRangeProvider(id = "TestEntityValueRange")
    public CountableValueRange<Integer> getStartPeriodRange() {
        return ValueRangeFactory.createIntValueRange(0, 10);
    }
}
The Solution class
@PlanningSolution
public class TestSolution {
    private List<TestEntity> testEntities;
    private HardSoftScore score;

    @PlanningEntityCollectionProperty
    public List<TestEntity> getTestEntities() {
        return testEntities;
    }

    public void setTestEntities(List<TestEntity> testEntities) {
        this.testEntities = testEntities;
    }

    @PlanningScore
    public HardSoftScore getScore() {
        return score;
    }

    public void setScore(HardSoftScore score) {
        this.score = score;
    }

    @Override
    public String toString() {
        String str = "";
        for (TestEntity testEntity : testEntities)
            str += testEntity.getValue() + " ";
        return str;
    }
}
The Main Program class
public class Main {
    public static final String SOLVER_CONFIG = "score/TestConfig.xml";
    public static int printCount = 0;

    public static void main(String[] args) {
        init();
    }

    private static void init() {
        SolverFactory<TestSolution> solverFactory = SolverFactory.createFromXmlResource(SOLVER_CONFIG);
        Solver<TestSolution> solver = solverFactory.buildSolver();

        TestSolution model = new TestSolution();
        List<TestEntity> list = new ArrayList<TestEntity>();
        // list.add(new TestEntity(){{setValue(7);}});
        // list.add(new TestEntity(){{setValue(1);}});
        // list.add(new TestEntity(){{setValue(1);}});
        // list.add(new TestEntity(){{setValue(1);}});
        for (int i = 0; i < 4; i++) {
            list.add(new TestEntity());
        }
        model.setTestEntities(list);

        // Solve the problem
        TestSolution solution = solver.solve(model);

        // Display the result
        System.out.println(solution);
    }
}
It gets stuck in a local optimum because there is no move that takes 1 from one entity and gives it to another entity. With a custom move you can add that.
These kinds of moves only apply to numeric value ranges (which are rare; usually value ranges are a list of employees, etc.), but they should probably exist out of the box (feel free to create a jira for them).
Anyway, another way to get the good solution is to add <exhaustiveSearch/>, which bypasses local search and therefore the local optimum. But that doesn't scale well.
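Applied to the solver config from the question, that suggestion might look like the following (untested sketch; configuring an explicit exhaustive search phase means the default construction heuristic and local search phases are not used):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<solver>
  <scanAnnotatedClasses/>
  <scoreDirectorFactory>
    <easyScoreCalculatorClass>score.TestScoreCalculator</easyScoreCalculatorClass>
  </scoreDirectorFactory>
  <termination>
    <secondsSpentLimit>30</secondsSpentLimit>
  </termination>
  <exhaustiveSearch/>
</solver>
```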
I have two timestamps as input. I want to calculate the time difference in hours between those timestamps, excluding Sundays.
I can get the number of days using the datediff function in Hive.
I can get the day of a particular date using from_unixtime(unix_timestamp(startdate), 'EEEE').
But I don't know how to combine those functions to achieve my requirement, or whether there is an easier way.
Thanks in advance.
You can write a custom UDF which takes the two date columns as inputs and counts the difference between the dates excluding Sundays:
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;

public class IsoYearWeek extends UDF {
    // takes the two date columns as inputs, e.g. "20/07/2016" and "28/07/2016"
    public LongWritable evaluate(Text dateString, Text dateString1) throws ParseException {
        SimpleDateFormat date = new SimpleDateFormat("dd/MM/yyyy");
        int count = 0;
        List<Date> dates = new ArrayList<Date>();
        Date startDate = date.parse(dateString.toString());
        Date endDate = date.parse(dateString1.toString());
        long interval = 24 * 1000 * 60 * 60; // 1 day in millis
        long endTime = endDate.getTime();
        long curTime = startDate.getTime();
        while (curTime <= endTime) {
            dates.add(new Date(curTime));
            curTime += interval;
        }
        for (int i = 0; i < dates.size(); i++) {
            Date lDate = dates.get(i);
            if (lDate.getDay() == 0) {
                count += 1; // counts the number of Sundays in between
            }
        }
        // days difference excluding Sundays
        long days_diff = (endDate.getTime() - startDate.getTime()) / (24 * 60 * 60 * 1000) - count;
        return new LongWritable(days_diff);
    }
}
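The same counting logic can also be written with java.time, which avoids the deprecated Date.getDay() and the raw millisecond arithmetic. A standalone sketch, not Hive-specific (class and method names are mine):

```java
import java.time.DayOfWeek;
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;
import java.time.temporal.ChronoUnit;

public class DaysDiffExcludingSundays {
    private static final DateTimeFormatter FMT = DateTimeFormatter.ofPattern("dd/MM/yyyy");

    // Day difference between the two dates, minus the Sundays in [start, end].
    public static long daysDiff(String start, String end) {
        LocalDate startDate = LocalDate.parse(start, FMT);
        LocalDate endDate = LocalDate.parse(end, FMT);
        long sundays = 0;
        for (LocalDate d = startDate; !d.isAfter(endDate); d = d.plusDays(1)) {
            if (d.getDayOfWeek() == DayOfWeek.SUNDAY) {
                sundays++;
            }
        }
        return ChronoUnit.DAYS.between(startDate, endDate) - sundays;
    }

    public static void main(String[] args) {
        // 24/07/2016 is the only Sunday in this range: 8 days minus 1 Sunday
        System.out.println(daysDiff("20/07/2016", "28/07/2016")); // 7
    }
}
```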
Use Spark instead, so that it will be easier to implement and maintain:
import org.joda.time.format.DateTimeFormat

def dayDiffWithExcludeWeekendAndHoliday(startDate: String, endDate: String, holidayExclusion: Seq[String]) = {
  @transient val datePattern = "yyyy-MM-dd"
  @transient val dateformatter = DateTimeFormat.forPattern(datePattern)
  var numWeekDaysValid = 0
  var numWeekends = 0
  var numWeekDaysInValid = 0
  val holidayExclusionJoda = holidayExclusion.map(dateformatter.parseDateTime(_))
  val startDateJoda = dateformatter.parseDateTime(startDate)
  var startDateJodaLatest = dateformatter.parseDateTime(startDate)
  val endDateJoda = dateformatter.parseDateTime(endDate)
  while (startDateJodaLatest.compareTo(endDateJoda) != 0) {
    startDateJodaLatest.getDayOfWeek match {
      case value if value > 5 => numWeekends = numWeekends + 1
      case value if value <= 5 =>
        if (holidayExclusionJoda.contains(startDateJodaLatest)) numWeekDaysInValid = numWeekDaysInValid + 1
        else numWeekDaysValid = numWeekDaysValid + 1
    }
    startDateJodaLatest = startDateJodaLatest.plusDays(1)
  }
  Array(numWeekDaysValid, numWeekends, numWeekDaysInValid)
}

spark.udf.register("dayDiffWithExcludeWeekendAndHoliday", dayDiffWithExcludeWeekendAndHoliday(_: String, _: String, _: Seq[String]))
case class tmpDateInfo(startDate: String, endDate: String, holidayExclusion: Array[String])
case class tmpDateInfoFull(startDate: String, endDate: String, holidayExclusion: Array[String],
                           numWeekDaysValid: Int, numWeekends: Int, numWeekDaysInValid: Int)

def dayDiffWithExcludeWeekendAndHolidayCase(tmpInfo: tmpDateInfo) = {
  @transient val datePattern = "yyyy-MM-dd"
  @transient val dateformatter = DateTimeFormat.forPattern(datePattern)
  var numWeekDaysValid = 0
  var numWeekends = 0
  var numWeekDaysInValid = 0
  val holidayExclusionJoda = tmpInfo.holidayExclusion.map(dateformatter.parseDateTime(_))
  val startDateJoda = dateformatter.parseDateTime(tmpInfo.startDate)
  var startDateJodaLatest = dateformatter.parseDateTime(tmpInfo.startDate)
  val endDateJoda = dateformatter.parseDateTime(tmpInfo.endDate)
  while (startDateJodaLatest.compareTo(endDateJoda) != 0) {
    startDateJodaLatest.getDayOfWeek match {
      case value if value > 5 => numWeekends = numWeekends + 1
      case value if value <= 5 =>
        if (holidayExclusionJoda.contains(startDateJodaLatest)) numWeekDaysInValid = numWeekDaysInValid + 1
        else numWeekDaysValid = numWeekDaysValid + 1
    }
    startDateJodaLatest = startDateJodaLatest.plusDays(1)
  }
  tmpDateInfoFull(tmpInfo.startDate, tmpInfo.endDate, tmpInfo.holidayExclusion,
                  numWeekDaysValid, numWeekends, numWeekDaysInValid)
}
// df way 1
val tmpDF = Seq(("2020-05-03", "2020-06-08", List("2020-05-08", "2020-06-05")))
  .toDF("startDate", "endDate", "holidayExclusion")
  .select(col("startDate").cast(StringType), col("endDate").cast(StringType), col("holidayExclusion"))
tmpDF.as[tmpDateInfo].map(dayDiffWithExcludeWeekendAndHolidayCase).show(false)

// df way 2
tmpDF.selectExpr("*", "dayDiffWithExcludeWeekendAndHoliday(cast(startDate as string), cast(endDate as string), cast(holidayExclusion as array<string>)) as resultDays")
  .selectExpr("startDate", "endDate", "holidayExclusion", "resultDays[0] as numWeekDaysValid", "resultDays[1] as numWeekends", "resultDays[2] as numWeekDaysInValid")
  .show(false)
// spark sql way, works with hive tables when the UDF is configured in the hive metastore
tmpDF.createOrReplaceTempView("tmpTable")
spark.sql("select startDate, endDate, holidayExclusion, dayDiffWithExcludeWeekendAndHoliday(startDate, endDate, holidayExclusion) from tmpTable").show(false)
Spring Data JPA 1.4.3 with Oracle 11g.
I have an entity like this:
class LinkRecord {
    String value;
    int linkType;
    ...
}
I am using (value, linkType) as a composite index.
For a given list of (v, t) tuples, we need to select all the records in the DB such that value = v and linkType = t.
Basically, I want to build this query:
SELECT * FROM LINK_RECORD WHERE (VALUE, LINK_TYPE) IN (('value1', 0), ('value2', 25), ...)
where the list in the IN clause is passed in as a param.
Since we're working with a large volume of data, it would be very undesirable to query for the tuples one by one.
In my repository I've tried this:
#Query("select r from LinkRecord r where (r.value, r.linkType) in :keys")
List<LinkRecord> findByValueAndType(#Param("keys")List<List<Object>> keys);
where keys is a list of (lists of length 2). This gets me ORA_00920: invalid relational operator.
Is there any way to make this work using a named query? Or do I have to resort to native sql?
This answer comes too late, but maybe someone else has the same problem. This is one of my working examples. Here I need to search for all entries that match a given composite key:
The entity....
@Entity
@NamedQueries({
    @NamedQuery(name = "Article.findByIdAndAccessId",
                query = "SELECT a FROM Article a WHERE a.articlePk IN (:articlePks) ORDER BY a.articlePk.article")
})
@Table(name = "ARTICLE")
public class Article implements Serializable
{
    private static final long serialVersionUID = 1L;

    @EmbeddedId
    private ArticlePk articlePk = new ArticlePk();

    @Column(name = "art_amount")
    private Float amount;

    @Column(name = "art_unit")
    private String unit;

    public Article()
    {
    }

    //more code
}
The PK class....
@Embeddable
public class ArticlePk implements Serializable
{
    private static final long serialVersionUID = 1L;

    @Column(name = "art_article")
    private String article;

    @Column(name = "art_acc_identifier")
    private Long identifier;

    public ArticlePk()
    {
    }

    public ArticlePk(String article, Long identifier)
    {
        this.article = article;
        this.identifier = identifier;
    }

    @Override
    public boolean equals(Object other)
    {
        if (this == other)
        {
            return true;
        }
        if (!(other instanceof ArticlePk))
        {
            return false;
        }
        ArticlePk castOther = (ArticlePk)other;
        return this.article.equals(castOther.article) && this.identifier.equals(castOther.identifier);
    }

    @Override
    public int hashCode()
    {
        final int prime = 31;
        int hash = 17;
        hash = hash * prime + this.article.hashCode();
        hash = hash * prime + this.identifier.hashCode();
        return hash;
    }

    //more code
}
Invocation by....
TypedQuery<Article> queryArticle = entityManager.createNamedQuery("Article.findByIdAndAccessId", Article.class);
queryArticle.setParameter("articlePks", articlePks);
List<Article> articles = queryArticle.getResultList();
where articlePks is a List<ArticlePk>.
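As a side note, a correct equals/hashCode pair like the one in ArticlePk is essential for composite keys: two key instances built from the same column values must compare equal. A stripped-down standalone check (JPA annotations omitted; Objects.hash used here for brevity instead of the hand-rolled version above):

```java
import java.util.Objects;

public class ArticlePkDemo {
    // stripped-down version of the embeddable PK class above
    static final class ArticlePk {
        final String article;
        final Long identifier;

        ArticlePk(String article, Long identifier) {
            this.article = article;
            this.identifier = identifier;
        }

        @Override
        public boolean equals(Object other) {
            if (this == other) return true;
            if (!(other instanceof ArticlePk)) return false;
            ArticlePk castOther = (ArticlePk) other;
            return article.equals(castOther.article) && identifier.equals(castOther.identifier);
        }

        @Override
        public int hashCode() {
            return Objects.hash(article, identifier);
        }
    }

    public static void main(String[] args) {
        // two keys with the same fields must be equal and hash the same
        ArticlePk a = new ArticlePk("A-100", 1L);
        ArticlePk b = new ArticlePk("A-100", 1L);
        System.out.println(a.equals(b) && a.hashCode() == b.hashCode()); // true
    }
}
```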