This is with reference to https://realm.io/docs/javascript/latest/api/tutorial-query-language.html
I am not looking for the variable substitution syntax mentioned in the documentation.
So I have a date field named createDate, and I am trying to query on it.
The filter query looks like createDate = ${someDate}, where someDate is in the format 'YYYY-MM-DD#HH:mm:ss'. I also tried == instead of = in the query, but this simply does not work.
It's hard to say without seeing how you're actually writing the query, but it could be how you're constructing it. Maybe try
realm.objects('YourClass').filtered('createDate = $0', someDate)
where someDate is a JavaScript Date object rather than a pre-formatted string, and 'YourClass' stands in for whatever your object type is. Using a placeholder variable ($0) solves query problems in Realm for me, since Realm binds the value directly instead of parsing it out of the query string.
I have a problem where the fix is to change which condition gets filtered first, but I'm not sure that is even possible, and I'm not knowledgeable enough about how it works.
To give an example:
Here is a table
When you filter this using the following query:
select * from pcparts where Parts = 'Monitor' and id = 255322 and Brand = 'Asus'
By that logic this should be correct, as the Asus component with a character in its ID will be filtered out, which will prevent an ORA-01722 error.
But in my experience this is inconsistent.
I tried using the same filtering on two different DB connections; the first one didn't get the error (as expected), but the other one got an ORA-01722 error.
Checking the explain plans, the difference between the two DBs is the following:
I was wondering if it's possible to make sure that Parts gets filtered before ID, but I was unable to find anything when searching. Is this even possible? If not, what is a fix for this issue that doesn't rely on using TO_CHAR?
I assume you want to (sort of) fix a buggy program without changing the source code.
According to your image, you are using "Filter Predicates"; this normally means Oracle isn't using an index (though I don't know which tool displays execution plans this way).
If you have an index on PARTS, Oracle will probably use this index.
create index myindex on mytable (parts);
If Oracle thinks this index is inefficient, it may still use a full table scan. You may try to 'fake' Oracle into thinking this is an efficient index by lying about the number of distinct values (the more distinct values, the more efficient):
exec dbms_stats.set_index_stats(ownname => 'myname', indname => 'myindex', numdist => 100000000)
Note: this WILL impact the performance of other queries using this table.
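To check whether the index actually gets picked up, a quick sketch using Oracle's plan display (reusing the query from the question; dbms_xplan is standard in any recent Oracle version):
explain plan for
select * from pcparts where Parts = 'Monitor' and id = 255322 and Brand = 'Asus';
select * from table(dbms_xplan.display);
An INDEX RANGE SCAN line in the output means the new index is used; TABLE ACCESS FULL means Oracle still prefers the full scan.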
"Fix" is rather simple: take control over what you're doing.
It is evident that ID column's datatype is VARCHAR2. Therefore, don't make Oracle guess, instruct it what to do.
No:  select * from pcparts where Parts = 'Monitor' and id = 255322 and Brand = 'Asus'
Yes: select * from pcparts where Parts = 'Monitor' and id = '255322' and Brand = 'Asus'
(note how the VARCHAR2 column's value is enclosed in single quotes)
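For reference, the ORA-01722 itself comes from implicit conversion: with an unquoted numeric literal, Oracle converts every ID value it inspects to a number, and the first non-numeric value fails. A minimal reproduction of the error:
select to_number('A12345') from dual;  -- raises ORA-01722: invalid number
Quoting the literal keeps the comparison entirely in VARCHAR2 territory, so no conversion is attempted on any row.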
I am trying to run a query of the following form with jooq in Kotlin:
val create = DSL.using(SQLDialect.POSTGRES)
val query: Query = create.select().from(DSL.table(tableName))
    .where(DSL.field("timestamp").between("1970-01-01T00:00:00Z").and("2021-11-05T00:00:00Z"))
    .orderBy(DSL.field("id").desc())
The code above gives me:
syntax error at or near "and"
Also, looking at this query in the debugger, the query.sql renders as:
select * from data_table where timestamp between ? and ? order by id desc
I am not sure if the ? indicates that it could not render the values to SQL, or if they are some sort of placeholders.
Also, the code works without the where chain.
Additionally, on the Postgres command line I can run the following and the query executes:
select * from data_table where timestamp between '1970-01-01T00:00:00Z' and '2021-11-05T00:00:00Z' order by id
Querying the datatypes on the schema, the timestamp column type is rendered as timestamp without time zone.
Before I had declared variables as:
val lowFilter = "1970-01-01T00:00:00Z"
val highFilter = "2021-11-05T00:00:00Z"
and this did not work, and it seems passing raw strings does not work either. I am very new to this, so I am pretty sure I am messing up the usage here.
EDIT
Following @nulldroid's suggestion, I did something like:
.where(DSL.field("starttime").between(DSL.timestamp("1970-01-01T00:00:00Z")).and(DSL.timestamp("2021-11-05T00:00:00Z")))
and this resulted in:
Type class org.jooq.impl.Val is not supported in dialect POSTGRES
Not using the code generator:
I'm going to assume you have a good reason not to use the code generator for this particular query, the main reason usually being that your schema is dynamic.
So, the correct way to write your query is this:
create.select()
    .from(DSL.table(tableName))
    // Attach a DataType to your timestamp field, to let jOOQ know about this
    .where(DSL.field("timestamp", SQLDataType.OFFSETDATETIME)
        // Use bind values of a temporal type
        .between(OffsetDateTime.parse("1970-01-01T00:00:00Z"))
        .and(OffsetDateTime.parse("2021-11-05T00:00:00Z")))
    .orderBy(DSL.field("id").desc())
Notice how I'm using actual temporal data types, rather than strings, to compare dates and declare fields.
I'm assuming, from your question's UTC timestamps, that you're using TIMESTAMPTZ. Otherwise, if you're using TIMESTAMP (which your "timestamp without time zone" column suggests), just replace OffsetDateTime with LocalDateTime, replace SQLDataType.OFFSETDATETIME with SQLDataType.LOCALDATETIME, and drop the trailing Z from the strings, since LocalDateTime.parse() won't accept it.
Using the code generator:
If using the code generator, which is always recommended if your schema isn't dynamic, you'd write almost the same thing as above, but type safe:
create.select()
    .from(MY_TABLE)
    // The generated field already carries its DataType, so nothing needs attaching
    .where(MY_TABLE.TIMESTAMP
        // Use bind values of a temporal type
        .between(OffsetDateTime.parse("1970-01-01T00:00:00Z"))
        .and(OffsetDateTime.parse("2021-11-05T00:00:00Z")))
    .orderBy(MY_TABLE.ID.desc())
I'm new to Airflow and I'm currently stuck on an issue with the BigQuery operator.
I'm trying to execute a simple query on a table from a given dataset and copy the result to a new table in the same dataset. I'm using the BigQuery operator to do so, since according to the docs the 'destination_dataset_table' parameter is supposed to do exactly what I'm looking for (source: https://airflow.apache.org/docs/stable/_api/airflow/contrib/operators/bigquery_operator/index.html#airflow.contrib.operators.bigquery_operator.BigQueryOperator).
But instead of copying the data, all I get is a new empty table with the schema of the one I'm querying from.
Here's my code
from datetime import datetime, timedelta

from airflow import DAG
from airflow.contrib.operators.bigquery_operator import BigQueryOperator

default_args = {
    'owner': 'me',
    'depends_on_past': False,
    'start_date': datetime(2019, 1, 1),
    'end_date': datetime(2019, 1, 3),
    'retries': 10,
    'retry_delay': timedelta(minutes=1),
}

dag = DAG(
    dag_id='my_dag',
    default_args=default_args,
    schedule_interval=timedelta(days=1),
)

copyData = BigQueryOperator(
    task_id='copyData',
    dag=dag,
    sql="SELECT some_columns,x,y,z FROM dataset_d.table_t WHERE some_columns=some_value",
    destination_dataset_table='dataset_d.table_u',
    bigquery_conn_id='something',
)
I don't get any warnings or errors; the code runs and the tasks are marked as success. It does create the table I wanted, with the columns I specified, but it's totally empty.
Any idea what I'm doing wrong?
EDIT: I tried the same code on a much smaller table (from 10 GB down to a few KB), performing a query with a much smaller result (from 500 MB down to a few KB), and it did work this time. Do you think the size of the table/the query result matters? Is it limited? Or does performing too large a query cause a lag of some sort?
EDIT2: After a few more tests I can confirm that this issue is not related to the size of the query or the table. It seems to have something to do with the date format. In my code the WHERE condition is actually checking whether a date_column = 'YYYY-MM-DD'. When I replace this condition with an int or string comparison, it works perfectly. Do you know if BigQuery uses a particular date format or requires a particular syntax?
EDIT3: Finally getting somewhere: when I cast my date_column as a date (CAST(date_column AS DATE)) to force its type to DATE, I get an error saying that my field is actually an int32 (argument type mismatch). But I'm SURE that this field is a date, so that implies that either BigQuery stores it as an int while displaying it as a date, or that the BigQuery operator does some kind of hidden type conversion while loading tables. Any idea how to fix this?
I had a similar issue when transferring data from data sources other than BigQuery.
I suggest formatting the date_column as follows: to_char(date_column, 'YYYY-MM-DD') as date
In general, I have seen that BigQuery's schema auto-detection is often problematic. The safest way is to always specify the schema before executing the corresponding query, or to use operators that support schema definition.
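If the column does end up as a DATE on the BigQuery side, a sketch of the comparison in standard SQL (assuming use_legacy_sql=False on the operator, and reusing the dataset_d.table_t names from the question; legacy SQL, the operator's old default, treats date literals differently):
SELECT some_columns, x, y, z
FROM dataset_d.table_t
WHERE CAST(date_column AS DATE) = DATE '2019-01-01'
The explicit DATE '2019-01-01' literal spells out the intended type instead of leaving BigQuery to guess from a bare string.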
Alright, so I understand the point of the HAVING clause. I am having an issue, and I am wondering if I can solve it the way I want to.
I want to execute one query using an ADODB.Recordset and then use the Filter property to sift through the data set.
The problem is the query, which at the moment looks like this:
SELECT tblMT.Folder, tblMT.MTDATE, tblMT.Cust, Sum(tblMT.Hours)
FROM tblMT
GROUP BY tblMT.Folder, tblMT.MTDATE, tblMT.Cust
HAVING tblMT.Cust LIKE "TEST*" AND Min(tblMT.MTDATE)>=Date()-30 AND MAX(tblMT.MTDATE)<=Date()
ORDER BY tblMT.MTDATE DESC;
So the above works as expected. However, I want to be able to use tblMT.Cust as the filter without having to keep re-querying the database. If I remove it from the query, I get a:
Data type mismatch in criteria expression.
Is what I am trying to do possible? If someone can point me in the right direction here would be great.
OK... the type mismatch is caused because either tblMT.MTDATE isn't a date field or tblMT.Hours isn't a number field, AND you have data that either isn't a date or isn't a number in the rows where the customer isn't like "TEST*". Or, for some customers, you have a NULL in MTDATE, and NULL can't be compared with >=. You'd still get the error if you said WHERE tblMT.Cust NOT LIKE "TEST*" too.
The problem is likely with the data or with your expectation, and you need to handle it.
What data types are tblMT.hours and tblMt.MtDate?
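As a side note on evaluation order: WHERE runs before grouping and HAVING runs after, so row-level conditions left in HAVING force the aggregates to be computed over every customer's rows first, bad data included. A sketch of the same query with the row-level condition moved up (same table and fields as above; this doesn't remove the customer filter, but it shows why the error only appears once that filter is gone):
SELECT tblMT.Folder, tblMT.MTDATE, tblMT.Cust, Sum(tblMT.Hours) AS SumOfHours
FROM tblMT
WHERE tblMT.Cust LIKE "TEST*"
GROUP BY tblMT.Folder, tblMT.MTDATE, tblMT.Cust
HAVING Min(tblMT.MTDATE) >= Date()-30 AND Max(tblMT.MTDATE) <= Date()
ORDER BY tblMT.MTDATE DESC;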
Assume mytable is an Oracle table and it has a field called id. The datatype of id is NUMBER(8). Compare the following queries:
select * from mytable where id like '715%'
and
select * from mytable where id between 71500000 and 71599999
I would think the second is more efficient, since I think "number comparison" would require fewer assembly language instructions than "string comparison". I need a confirmation or correction. Please confirm/correct, and add any further comments related to either operator.
UPDATE: I forgot to mention one important piece of info: id in this case must be an 8-digit number.
If you only want values between 71500000 and 71599999, then yes, the second one is much more efficient. The first one would also return values between 7150-7159, 71500-71599, and so forth. You would either need to sift through unnecessary results or write another couple of lines of code to filter the rest of them out. The second option is definitely more efficient for what you seem to want to do.
It seems like the execution plan on the second query is more efficient.
The first query is doing a full table scan of the id's, whereas the second query is not.
My Test Data:
Execution Plan of first query:
Execution Plan of second query:
I don't like the idea of using LIKE with a numeric column.
Also, it may not give the results you are looking for.
If you have a value of 715000000, it will show up in the query result, even though it is larger than 71599999.
Also, I do not like BETWEEN on principle.
If a thing is between two other things, it should not include those two other things. But this is just a personal annoyance.
I prefer to use >= and <=. This avoids confusion when I read the query. In addition, sometimes I have to change the query to something like >= a AND < c. If I started by using the BETWEEN operator, I would have to rewrite it when I don't want to be inclusive.
Harv
In addition to the other points raised, using LIKE in the manner you suggest would cause Oracle not to use any indexes on the ID column, due to the implicit conversion of the data from number to character. The result is a full table scan when using LIKE versus an index range scan when using BETWEEN, assuming, of course, you have an index on ID. Even if you don't, Oracle will still have to do the type conversion on each value it scans in the LIKE case, which it won't have to do in the other.
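In other words, the LIKE version behaves as if the column itself were wrapped in a conversion, which is a predicate shape a plain index on id cannot serve (a sketch of the effective predicate, reusing the mytable name from the question):
select * from mytable where to_char(id) like '715%';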
You can use a math function; otherwise you have to use the TO_CHAR function in order to use LIKE, but that will cause performance problems.
select * from mytable where floor(id /100000) = 715
or
select * from mytable where floor(id /100000) = TO_NUMBER('715') -- this is parametric
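Note that wrapping id in floor() likewise stops a plain index on id from being used for this predicate. A sketch of a workaround, assuming the mytable name from the question and a hypothetical index name, is an Oracle function-based index on the same expression:
-- hypothetical function-based index matching the floor() expression
create index mytable_floor_idx on mytable (floor(id / 100000));
With that in place, the floor(id /100000) = 715 predicate can be resolved with an index range scan.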