Substring search a numeric field with JPA/Hibernate - sql

I have a JPA entity that has a numeric field. Something like:
@Basic(optional = false)
@Column(name = "FISCAL_YEAR", nullable = false)
private int fiscalYear;
I have a requirement to sub-string search this field. For example, I want a search for 17 to give me 2017 and 1917 and 1789. Forget for a minute what a crazy request this is and assume I have a real use case that makes sense. Changing the column to a varchar in the database is not an option.
In PL/SQL, I'd convert the field to a varchar and do a like '%17%'. How would I accomplish this with Hibernate/JPA without using a native query? I need to be able to use HQL or Criteria to do the same thing.

Achieving LIKE on numeric values using CriteriaBuilder
Table
CREATE TABLE `Employee` (
  `id` int(11) NOT NULL,
  `first` varchar(255) DEFAULT NULL,
  `last` varchar(255) DEFAULT NULL,
  `occupation` varchar(255) DEFAULT NULL,
  `year` int(11) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8
Entity
private Integer year;
public Integer getYear() {
    return year;
}

public void setYear(Integer year) {
    this.year = year;
}
Data in the table
+----+-------+------+------------+------+
| id | first | last | occupation | year |
+----+-------+------+------------+------+
| 2 | Ravi | Raj | Textile | 1718 |
| 3 | Ravi | Raj | Textile | 1818 |
| 4 | Ravi | Raj | Textile | 1917 |
| 5 | Ravi | Raj | Textile | NULL |
| 6 | Ravi | Raj | Textile | NULL |
| 7 | Ravi | Raj | Textile | NULL |
+----+-------+------+------------+------+
Constructing the query using CriteriaBuilder:
public List<Employee> getEmployees() {
    CriteriaBuilder cb = entityManager.getCriteriaBuilder();
    CriteriaQuery<Employee> q = cb.createQuery(Employee.class);
    Root<Employee> emp = q.from(Employee.class);
    Predicate year_like = cb.like(emp.<Integer>get("year").as(String.class), "%17%");
    CriteriaQuery<Employee> fq = q.where(year_like);
    List<Employee> resultList = entityManager.createQuery(fq).getResultList();
    return resultList;
}
Query generated (with show_sql: true):
Hibernate: select employee0_.id as id1_0_, employee0_.first as first2_0_, employee0_.last as last3_0_, employee0_.occupation as occupati4_0_, employee0_.year as year5_0_ from Employee employee0_ where cast(employee0_.year as char) like ?
Query Output
// only id and year are printed to the console
id, year
2, 1718
4, 1917
------------------------------------------------------------
Alternate way
LIKE also works directly on the numeric field; tested with JPA, Hibernate, and MySQL.
Note: this may not work with other JPA providers.
Query r = entityManager.createQuery("select c from Employee c where c.year like '%17%'");
Query fired (with show_sql=true):
Hibernate: select employee0_.id as id1_0_, employee0_.first as first2_0_, employee0_.last as last3_0_, employee0_.occupation as occupati4_0_, employee0_.year as year5_0_ from Employee employee0_ where employee0_.year like '%17%'
Query Result
// only id and year are printed to the console
id, year
2, 1718
4, 1917
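If the search term comes from user input rather than a constant, the same idea works with a bound parameter. A minimal sketch, assuming the Employee entity above and the same injected entityManager; because binding a string parameter directly against a numeric path is not guaranteed to be portable across providers, this variant keeps an explicit cast, as in the CriteriaBuilder example:
// Hedged sketch: parameterized version of the LIKE search on a numeric field.
// The cast keeps the comparison on the string side, matching the generated SQL shown above.
String term = "17"; // the substring to search for (illustrative)
List<Employee> result = entityManager.createQuery(
        "select c from Employee c where cast(c.year as string) like :pattern",
        Employee.class)
    .setParameter("pattern", "%" + term + "%")
    .getResultList();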

You can declare your own Criterion type
public class CrazyLike implements Criterion {

    private final String propertyName;
    private final int intValue;

    public CrazyLike(String propertyName, int intValue) {
        this.propertyName = propertyName;
        this.intValue = intValue;
    }

    @Override
    public String toSqlString(Criteria criteria, CriteriaQuery criteriaQuery)
            throws HibernateException {
        final String[] columns = criteriaQuery.findColumns(propertyName, criteria);
        if (columns.length != 1) {
            throw new HibernateException("Crazy Like may only be used with single-column properties");
        }
        final String column = columns[0];
        return "cast(" + column + " as text) like '%" + intValue + "%'";
    }

    @Override
    public TypedValue[] getTypedValues(Criteria criteria,
            CriteriaQuery criteriaQuery) throws HibernateException {
        return new TypedValue[] { };
    }
}
And then use it like this:
Criteria criteria = session.createCriteria(Person.class);
List<Person> persons = criteria.add(new CrazyLike("year", 17)).list();
assuming that Person has an int property called year. This should produce SQL like this:
select
this_.id as id1_2_0_,
this_.birthdate as birthdat2_2_0_,
this_.firstname as firstnam3_2_0_,
this_.lastname as lastname4_2_0_,
this_.ssn as ssn5_2_0_,
this_.version as version6_2_0_,
this_.year as year7_2_0_
from
Person this_
where
cast(this_.year as text) like '%17%'
This was tested with Postgres. The cast() syntax may vary for your database engine. If it does, just use the appropriate syntax in the Criterion class that you implement.
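For example, on MySQL the cast target would be char rather than text, which matches the SQL Hibernate itself generated in the CriteriaBuilder answer above. A hedged sketch of the only line of toSqlString() that would change:
// MySQL variant: CAST(... AS CHAR) instead of CAST(... AS TEXT)
return "cast(" + column + " as char) like '%" + intValue + "%'";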

Related

Update column with the same value apart from an object removed in column in Sqlite

I want to remove an object from a json column in Sqlite and I can't make it work. The json column contains a nested object and has the following type:
{
a: number;
pair: {
field1: string;
field2: string;
}[]
}
I want to update the column "ArrayColumn" with the same values but remove the object that has field1 equal to "0" and field2 equal to "1" . Every row contains the "pair" array, but not all the "pair" arrays in ArrayColumn contain this value ({"field1":"0", "field2":"1"})
I have the following structure:
Id| ArrayColumn
--------------------------------------------------------------------------------------------
1 | { "a":1, "pair":[{"field1":"0", "field2":"1"},{"field1":"C", "field2":"D"},{"field1":"E", "field2":"F"}] }
2 | { "a":5, "pair":[{"field1":"C", "field2":"D"},{"field1":"E", "field2":"F"}] }
3 | { "a":8, "pair":[{"field1":"G", "field2":"G"},{"field1":"0", "field2":"1"},{"field1":"A", "field2":"A"}] }
4 | { "a":1, "pair":[{"field1":"F", "field2":"T"},{"field1":"C", "field2":"D"},{"field1":"0", "field2":"1"}] }
5 | { "a":1, "pair":[{"field1":"A", "field2":"B"}] }
After updating the rows, the values would be:
Id| ArrayColumn
--------------------------------------------------------------------------------------------
1 | { "a":1, "pair":[{"field1":"C", "field2":"D"},{"field1":"E", "field2":"F"}] }
2 | { "a":5, "pair":[{"field1":"C", "field2":"D"},{"field1":"E", "field2":"F"}] }
3 | { "a":8, "pair":[{"field1":"G", "field2":"G"},{"field1":"A", "field2":"A"}] }
4 | { "a":1, "pair":[{"field1":"F", "field2":"T"},{"field1":"C", "field2":"D"}] }
5 | { "a":1, "pair":[{"field1":"A", "field2":"B"}] }
I tried with JSON_TREE but can't make it work.
I was thinking that the first step would be to select all the rows that contain that value. I retrieved them in these 2 ways:
With the LIKE operator, searching for the stringified form:
select Id, json_extract(json(par), '$.pair') as pair from Table where pair like '%{"field1":"0","field2":"1"}%'
Using json_tree
select Id, value from Table, json_tree(Table.ArrayColumn, '$.pair' ) where json_extract(value, '$.field1' ) = '0' AND json_extract(value, '$.field2' ) = '1'
I tried using json_remove with this small example but no luck:
SELECT json_remove('[{"field1":"1","field2":"0"},{"field1":"A","field2":"B"}]', '${"field1":"1","field2":"0"}' )
Thank you
For this sample data, the simplest way is to treat the json column as a string and use string functions to remove the value that you want:
UPDATE tablename
SET ArrayColumn = REPLACE(REPLACE(REPLACE(ArrayColumn, ']', ',]'), '{"field1":"0", "field2":"1"},', ''), ',]', ']')
WHERE ArrayColumn LIKE '%{"field1":"0", "field2":"1"}%';

How can I extract a json column into new columns automatically in Snowflake SQL?

This is an example taken from another thread, but essentially I would like to achieve this:
Sample data
ID Name Value
1 TV1 {"URL": "www.url.com", "Icon": "some_icon"}
2 TV2 {"URL": "www.url.com", "Icon": "some_icon", "Facebook": "Facebook_URL"}
3 TV3 {"URL": "www.url.com", "Icon": "some_icon", "Twitter": "Twitter_URL"}
..........
Expected output
ID Name URL Icon Facebook Twitter
1 TV1 www.url.com some_icon NULL NULL
2 TV2 www.url.com some_icon Facebook_URL NULL
3 TV3 www.url.com some_icon NULL Twitter_URL
I'm totally new to Snowflake, so I'm scratching my head over how to do this easily (and hopefully automatically, since some rows might have more elements in the json than others, which would be tedious to map manually). Some lines might have sub-categories too.
I found the parse_json function for Snowflake, but it's only giving me the same json column in a new column, still in json format.
TIA!
You can create a view over your table with the following SELECT:
SELECT ID,
Name,
Value:URL::varchar as URL,
Value:Icon::varchar as Icon,
Value:Facebook::varchar as Facebook,
Value:Twitter::varchar as Twitter
FROM tablename;
Additional attributes will be ignored unless you add them to the view. There is no way to "automatically" include them into the view, but you could create a stored procedure that dynamically generates the view based on all the attributes that are in the full variant content of a table.
You can create a SP to automatically build the CREATE VIEW for you based on the JSON data in the VARIANT.
Here is a simple example:
-- prepare the table and data
create or replace table test (
col1 int, col2 string,
data1 variant, data2 variant
);
insert into test select 1,2, parse_json(
'{"URL": "test", "Icon": "test1", "Facebook": "http://www.facebook.com"}'
), parse_json(
'{"k1": "test", "k2": "test1", "k3": "http://www.facebook.com"}'
);
insert into test select 3,4,parse_json(
'{"URL": "test", "Icon": "test1", "Twitter": "http://www.twitter.com"}'
), parse_json(
'{"k4": "v4", "k3": "http://www.ericlin.me"}'
);
-- create the SP, we need to know which table and
-- column has the variant data
create or replace procedure create_view(
table_name varchar
)
returns string
language javascript
as
$$
var final_columns = [];
// first, find out the columns
var query = `SHOW COLUMNS IN TABLE ${TABLE_NAME}`;
var stmt = snowflake.createStatement({sqlText: query});
var result = stmt.execute();
var variant_columns = [];
while (result.next()) {
var col_name = result.getColumnValue(3);
var data_type = JSON.parse(result.getColumnValue(4));
// just use it if it is not a VARIANT type
// if it is variant type, we need to remember this column
// and then run query against it later
if (data_type["type"] != "VARIANT") {
final_columns.push(col_name);
} else {
variant_columns.push(col_name);
}
}
var columns = {};
query = `SELECT ` + variant_columns.join(', ') + ` FROM ${TABLE_NAME}`;
stmt = snowflake.createStatement({sqlText: query});
result = stmt.execute();
while (result.next()) {
for(i=1; i<=variant_columns.length; i++) {
var sub_result = result.getColumnValue(i);
if(!sub_result) {
continue;
}
var keys = Object.keys(sub_result);
for(j=0; j<keys.length; j++) {
columns[variant_columns[i-1] + ":" + keys[j]] = keys[j];
}
}
}
for(path in columns) {
final_columns.push(path + "::STRING AS " + columns[path]);
}
var create_view_sql = "CREATE OR REPLACE VIEW " +
TABLE_NAME + "_VIEW\n" +
"AS SELECT " + "\n" +
" " + final_columns.join(",\n ") + "\n" +
"FROM " + TABLE_NAME + ";";
snowflake.execute({sqlText: create_view_sql});
return create_view_sql + "\n\nVIEW created successfully.";
$$;
Executing the SP returns the string below:
call create_view('TEST');
+---------------------------------------+
| CREATE_VIEW |
|---------------------------------------|
| CREATE OR REPLACE VIEW TEST_VIEW |
| AS SELECT |
| COL1, |
| COL2, |
| DATA1:Facebook::STRING AS Facebook, |
| DATA1:Icon::STRING AS Icon, |
| DATA1:URL::STRING AS URL, |
| DATA2:k1::STRING AS k1, |
| DATA2:k2::STRING AS k2, |
| DATA2:k3::STRING AS k3, |
| DATA1:Twitter::STRING AS Twitter, |
| DATA2:k4::STRING AS k4 |
| FROM TEST; |
| |
| VIEW created successfully. |
+---------------------------------------+
Then query the VIEW:
SELECT * FROM TEST_VIEW;
+------+------+-------------------------+-------+------+------+-------+-------------------------+------------------------+------+
| COL1 | COL2 | FACEBOOK | ICON | URL | K1 | K2 | K3 | TWITTER | K4 |
|------+------+-------------------------+-------+------+------+-------+-------------------------+------------------------+------|
| 1 | 2 | http://www.facebook.com | test1 | test | test | test1 | http://www.facebook.com | NULL | NULL |
| 3 | 4 | NULL | test1 | test | NULL | NULL | http://www.ericlin.me | http://www.twitter.com | v4 |
+------+------+-------------------------+-------+------+------+-------+-------------------------+------------------------+------+
Query the source table:
SELECT * FROM TEST;
+------+------+------------------------------------------+-----------------------------------+
| COL1 | COL2 | DATA1 | DATA2 |
|------+------+------------------------------------------+-----------------------------------|
| 1 | 2 | { | { |
| | | "Facebook": "http://www.facebook.com", | "k1": "test", |
| | | "Icon": "test1", | "k2": "test1", |
| | | "URL": "test" | "k3": "http://www.facebook.com" |
| | | } | } |
| 3 | 4 | { | { |
| | | "Icon": "test1", | "k3": "http://www.ericlin.me", |
| | | "Twitter": "http://www.twitter.com", | "k4": "v4" |
| | | "URL": "test" | } |
| | | } | |
+------+------+------------------------------------------+-----------------------------------+
You can refine this SP to detect nested data and have them added to the columns list as well.

How to automate a field mapping using a table in snowflake

I have one column table in my snowflake database that contain a JSON mapping structure as following
ColumnMappings : {"Field Mapping": "blank=Blank,E=East,N=North,"}
How do I write a query such that if I feed the field mapping a value of E I get East, or if the value is N I get North, and so on, without hard-coding the values in the query the way a CASE statement would require?
You really want your mapping in this JSON form:
{
"blank" : "Blank",
"E" : "East",
"N" : "North"
}
You can achieve that in Snowflake e.g. with a simple JS UDF:
create or replace table x(cm variant) as
select parse_json(*) from values('{"fm": "blank=Blank,E=East,N=North,"}');
create or replace function mysplit(s string)
returns variant
language javascript
as $$
res = S
.split(",")
.reduce(
(acc,val) => {
var vals = val.split("=");
acc[vals[0]] = vals[1];
return acc;
},
{});
return res;
$$;
select cm:fm, mysplit(cm:fm) from x;
-------------------------------+--------------------+
CM:FM | MYSPLIT(CM:FM) |
-------------------------------+--------------------+
"blank=Blank,E=East,N=North," | { |
| "E": "East", |
| "N": "North", |
| "blank": "Blank" |
| } |
-------------------------------+--------------------+
And then you can simply extract values by key with GET, e.g.
select cm:fm, get(mysplit(cm:fm), 'E') from x;
-------------------------------+--------------------------+
CM:FM | GET(MYSPLIT(CM:FM), 'E') |
-------------------------------+--------------------------+
"blank=Blank,E=East,N=North," | "East" |
-------------------------------+--------------------------+
For performance, you might want to make sure you call mysplit only once per value in your mapping table, or even pre-materialize it.

Filtering records by city and state

I have 2 tables named city and state:
city
id_city | name_city
1 | JED
2 | RUD
3 | DMM
state
id_state | id_for_city | name_state
1 | 1 | JED1
2 | 1 | JED2
3 | 2 | RUH1
4 | 2 | RUH2
I am using 2 ComboBoxes:
the first, comboBox1, selects name_city (this one is OK);
the second, comboBox2, should select name_state through id_for_city, but the left join on id_city is not working (this is the problem).
How can I write the query using a left join in Java?
My code:
First comboBox1 (I think it's OK):
public void Filecombo() {
    try {
        String sql = "select name_city from city";
        pstmt = conn.prepareStatement(sql);
        rs = pstmt.executeQuery();
        while (rs.next()) {
            options.add(rs.getString("name_city"));
        }
        comboCity.setItems(options);
        pstmt.close();
        rs.close();
    } catch (SQLException e) {
        e.printStackTrace();
    }
}
Second comboBox2 (here is the problem):
public void Filecombo2() {
    try {
        String sq2 = "select name_state from state left join city on city.id_city = state.id_from_city";
        pstmt2 = conn.prepareStatement(sq2);
        rs = pstmt2.executeQuery();
        while (rs.next()) {
            options2.add(rs.getString("name_state"));
        }
        comboBranch.setItems(options2);
        pstmt.close();
        rs.close();
    } catch (SQLException e) {
        e.printStackTrace();
    }
}
The result I want: if I select different cities like RUH or DMM or JED, comboBox2 should show only the entries related to the city selected in comboBox1.
Your question does not seem to have much to do with Java, and I'll assume that your JDBC code is basically working, and you are already getting a result set, albeit perhaps not exactly what you want. I think you just need to add a WHERE clause to your query:
SELECT s.name_state
FROM state s
INNER JOIN city c
    ON c.id_city = s.id_for_city
WHERE
    c.name_city = 'JED';
Note that I replaced the left join with an inner join, since you want state names only, and those come from the state table; the join to city is only there for filtering. A left join would be desirable if you wanted to return a NULL for states that do not match anything in the other table, but that doesn't appear to be the case here.
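To hook that up in Java, bind the city chosen in comboBox1 as a parameter instead of concatenating it into the SQL. A hedged sketch along the lines of the Filecombo2 method from the question; the method name fillComboBranch is made up here, and conn, options2, comboCity and comboBranch are assumed to be the same fields used in the question:
public void fillComboBranch(String selectedCity) {
    String sql = "select s.name_state from state s "
            + "inner join city c on c.id_city = s.id_for_city "
            + "where c.name_city = ?";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
        ps.setString(1, selectedCity); // the city currently selected in comboCity
        try (ResultSet rs = ps.executeQuery()) {
            options2.clear();
            while (rs.next()) {
                options2.add(rs.getString("name_state"));
            }
        }
        comboBranch.setItems(options2);
    } catch (SQLException e) {
        e.printStackTrace();
    }
}
Call it from comboCity's selection listener so comboBranch refreshes whenever the city changes.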

cucumber-jvm: compare datatables ignoring certain columns

I want to compare a DataTable from a feature file against one built from the page, but I want to ignore certain date-time fields. Is there a straightforward way to do this? Thanks.
Missing fields are ignored when comparing a DataTable against a list of objects, e.g.:
static class SomeBean {
    String field1;
    String field2;
    String field3;

    public SomeBean(String field1, String field2, String field3) {
        this.field1 = field1;
        this.field2 = field2;
        this.field3 = field3;
    }
}

DataTable expectationBeanTable = DataTable.create(Arrays.asList(
        new SomeBean("value1", "value2", null)
));

List<SomeBean> actual = Arrays.asList(
        new SomeBean("value1", "value2", "value3")
);

expectationBeanTable.diff(actual); // OK

DataTable expectationStringTable = DataTable.create(Arrays.asList(
        Arrays.asList("field1", "field2"),
        Arrays.asList("value1", "value2")
));

expectationStringTable.diff(actual); // Also OK
Won't work when comparing two DataTables though:
expectationStringTable.diff(DataTable.create(actual));
java.lang.IllegalArgumentException: Tables must have equal number of columns:
| field1 | field2 |
| value1 | value2 |
| field1 | field2 | field3 |
| value1 | value2 | value3 |
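If you do need to compare two full tables, one hedged workaround is to drop the ignored columns (the date-time ones) from both sides by header name before comparing. The sketch below is plain Java; getting each table's rows as a List<List<String>> is an assumption (older cucumber-jvm exposes this via DataTable.raw()), and "created_at" is just an example column name:
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Hedged sketch: remove ignored columns by header name, then compare the reduced tables.
static List<List<String>> dropColumns(List<List<String>> rows, Set<String> ignored) {
    List<String> header = rows.get(0);
    List<Integer> keep = new ArrayList<>();
    for (int i = 0; i < header.size(); i++) {
        if (!ignored.contains(header.get(i))) {
            keep.add(i);
        }
    }
    List<List<String>> reduced = new ArrayList<>();
    for (List<String> row : rows) {
        List<String> cells = new ArrayList<>();
        for (int col : keep) {
            cells.add(row.get(col));
        }
        reduced.add(cells);
    }
    return reduced;
}

// Usage sketch: strip the ignored columns from both row lists and assert equality, e.g.
// assertEquals(dropColumns(expectedRows, Set.of("created_at")), dropColumns(actualRows, Set.of("created_at")));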