SQL: Ordering by how much greater than something is?

I have two datetime fields here: actual_delivery and scheduled_delivery
What I want to ORDER BY is how much greater actual_delivery is than scheduled_delivery.
I'm using MySQL locally and PostgreSQL in production, so it needs to work for both.

If I were doing it in SQL Server I'd calculate DATEDIFF(minute, scheduled_delivery, actual_delivery) AS [DeliveryDifference] and then order by that computed column.
A quick search indicates there's a DATEDIFF function in MySQL, but the syntax may be slightly different in PostgreSQL, so you may have to create your own function there.

Try this:
SELECT actual_delivery, scheduled_delivery, actual_delivery - scheduled_delivery as difference FROM tablename ORDER BY difference
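Note that MySQL's bare DATETIME subtraction doesn't return a true duration, so if the simple form misbehaves, an explicit difference in seconds is safer. A hedged sketch, assuming a table named deliveries (the table name is a placeholder); each statement is dialect-specific:
-- MySQL: difference in seconds between the two timestamps
SELECT actual_delivery, scheduled_delivery,
       TIMESTAMPDIFF(SECOND, scheduled_delivery, actual_delivery) AS difference
FROM deliveries
ORDER BY difference DESC;
-- PostgreSQL: subtract to get an interval, then convert to seconds
SELECT actual_delivery, scheduled_delivery,
       EXTRACT(EPOCH FROM actual_delivery - scheduled_delivery) AS difference
FROM deliveries
ORDER BY difference DESC;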

TABLE_DATE_RANGE for xxxx_yyyymm format tables

I'm having a problem trying to query 15 months' worth of data.
I know about BigQuery's wildcard functions, but I can't seem to get them to work with my tables.
For example, if my tables are called:
xxxx_201501,
xxxx_201502,
xxxx_201503,
...
xxxx_201606
How can I select everything from 201501 until today (current_timestamp)?
It seems that it's necessary to have one table per day; am I wrong?
I've also read that you can use a regex, but I can't find out how.
With Standard SQL, you can use a WHERE clause on a _TABLE_SUFFIX pseudo column as described here:
Is there an equivalent of table wildcard functions in BigQuery with standard SQL?
In this particular case, it would be:
SELECT ... FROM `mydataset.xxxx_*` WHERE _TABLE_SUFFIX >= '201501';
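To cap the range at the current month, the suffix can also be compared against a formatted current timestamp; a sketch, assuming the xxxx_ prefix and a dataset named mydataset (placeholders from the question):
SELECT *
FROM `mydataset.xxxx_*`
WHERE _TABLE_SUFFIX BETWEEN '201501'
  AND FORMAT_TIMESTAMP('%Y%m', CURRENT_TIMESTAMP());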
This is a bit long for a comment.
If you are using the standard SQL dialect, then I don't think the functionality is yet implemented.
If you are using the legacy SQL dialect, then you can use a function such as TABLE_DATE_RANGE(). This and other table wildcard functions are well documented.
EDIT:
Oh, I see. The simplest way would be to name the tables with a YYYYMM01 suffix so you can use the range query.
But you can also use TABLE_QUERY():
FROM TABLE_QUERY(t, 'RIGHT(table_id, 6) >= "201501"')
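Spelled out as a full legacy-SQL query (mydataset is a placeholder for the real dataset):
SELECT *
FROM (TABLE_QUERY([mydataset], 'RIGHT(table_id, 6) >= "201501"'))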

Subtract hours from SQL Server 2012 query result

I am running queries on an alarm system signal automation platform database in SQL Server 2012 Management Studio, and I am running into some hiccups.
My queries run just fine, but I am unable to refine my results to the level that I would like.
I am selecting some columns that are formatted as DATETIME, and I simply want to take the value in the column and subtract 4 hours from it (i.e., from GMT to EST) and then output that value into the query results.
All of the documentation I can find regarding DATESUB() or similar commands shows examples with a specific DATETIME in the syntax, and I don't have anything specific, just 138,000 rows with columns whose time zones I want to adjust.
Am I missing something big, or will I just need to keep adjusting manually after I begin manipulating my query results? Also, in case it makes a difference, I have read-only access and am not interested in altering table data in any way.
Well, for starters, you need to know that you aren't restricted to using functions only on static values; you can use them on columns.
It seems that what you want is simply:
SELECT DATEADD(HOUR,-4,YourColumnWithDateTimes)
FROM dbo.YourTable
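In context it might look like the following; the table and column names are placeholders for whatever the alarm platform actually uses:
SELECT SignalID,
       SignalTime,
       DATEADD(HOUR, -4, SignalTime) AS SignalTimeEastern
FROM dbo.AlarmSignals
ORDER BY SignalTimeEastern;
Since this is just a SELECT, it works fine with read-only access; nothing in the table is altered.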
Maybe this will be helpful:
SELECT DATEPART(HOUR, GETDATE());
DATEPART docs from MSDN

BETWEEN two dates in an Access database using SQL

Me again, sorry.
Trying to get all the data between two dates in my Access database. The data type for the column is "Date/Time", which I believe is correct.
SELECT *
FROM SaleProperty
WHERE MarketDate BETWEEN '01/05/2013' AND '30/06/2013';
When I use this, I am told that there is a data type mismatch and the query does not work. Sorry again for what is probably a very basic question, but I have no idea what I am doing wrong.
Cheers again!
Access (Jet) uses # to identify a date literal. Try this:
SELECT *
FROM SaleProperty
WHERE MarketDate BETWEEN #01/05/2013# AND #30/06/2013#;
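One caveat: inside the # delimiters, Access interprets ambiguous dates as US month/day/year, so #01/05/2013# may be read as January 5 rather than May 1. An unambiguous year-first literal avoids that; a sketch of the same query:
SELECT *
FROM SaleProperty
WHERE MarketDate BETWEEN #2013-05-01# AND #2013-06-30#;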

Grouping by month in the database and not with Ruby

I'm trying to group calls by month but I need to do it in the database and not with ruby. Here is the current code:
Call.limit(1000).group_by { |t| t.created_at.month }
Which returns:
SELECT `calls`.* FROM `calls` ORDER BY created_at desc LIMIT 1000
Then Ruby does the grouping. What should I do to make the database do the work?
Thank you.
The short answer is that you cannot achieve the same result at the SQL level.
Here's the full explanation.
First of all, what should the result of that call be? You can use the PostgreSQL GROUP BY statement; however, it's likely the result is not what you expect.
The GROUP BY syntax is designed to group rows matching a pattern and compute an aggregate function. In your case, even assuming you create a query that uses date_trunc to group by part of the timestamp, the aggregate function does not let you return a dataset structured like the result of Ruby's group_by method.
Why do you want to compute such a grouping at the database level? If you have specific requirements or computation limits, then work on a custom method.
Use Call.limit(1000).group("month(created_at)")
Please check out the MySQL date and time functions appropriate to your case; .group() will make MySQL do the grouping.
http://dev.mysql.com/doc/refman/5.1/en/date-and-time-functions.html#function_month
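For reference, a hedged sketch of the SQL this pushes to the database; MONTH() is MySQL-specific, and on PostgreSQL you would use date_trunc('month', created_at) instead:
SELECT MONTH(created_at) AS month,
       COUNT(*) AS call_count
FROM calls
GROUP BY MONTH(created_at)
ORDER BY month;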

Search in every column

I'm building an abstract gem. I need an SQL query that looks like this:
SELECT * FROM my_table WHERE * LIKE '%my_search%'
Is that possible?
Edit:
I don't care about the query's performance because it's a feature of an admin panel which is used once a month. I also don't know what columns the table has because it's so abstract. Sure, I could use some Rails ActiveRecord functions to find all the columns, but I hoped to avoid adding that logic and to just use the *. It's going to be a gem, and I can't know what DB will be used with it. Maybe there is a sexy Rails function that helps me out here.
As I understand the question, you are basically trying to build an SQL statement that checks a condition across all columns in the table. A dirty hack, but this generates the required SQL.
# One "column LIKE ?" fragment per column, OR-ed together, with one bind value per fragment
condition_string = MyTable.column_names.map { |name| "#{name} LIKE ?" }.join(' OR ')
MyTable.all(:conditions => [condition_string, *(['%my_search%'] * MyTable.column_names.size)])
However, this is not tested. This might work.
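For reference, with hypothetical columns name and notes, the generated query would expand to something like:
SELECT * FROM my_table
WHERE name LIKE '%my_search%'
   OR notes LIKE '%my_search%';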
* LIKE '...' isn't valid according to the SQL standard, and it isn't supported by any RDBMS I'm aware of. You could try using a function like CONCAT to build the left argument of LIKE, though performance won't be good. As for SELECT *, it's generally something to avoid.
No, SQL does not support that syntax.
To search all columns you need to use procedures or dynamic SQL. Here's another SO question which may help:
SQL: search for a string in every varchar column in a database
EDIT: Sorry, the question I linked to is looking for a field name, not the data, but it might help you write some dynamic SQL to build the query you need.
You didn't say which database you are using, as there might be a vendor-specific solution.
It's only an idea, but I think it's worth testing!
Depending on your DB, you can get all the columns of a table; in MSSQL, for example, you can use something like:
SELECT name FROM syscolumns WHERE id = OBJECT_ID('Tablename')
Under Oracle, I guess it's like:
SELECT column_name FROM USER_TAB_COLUMNS WHERE TABLE_NAME = 'Tablename'
Then you would have to go through these columns using a procedure, and maybe a cursor, so you can check each column for whether the data you're searching for is in there:
IF ((SELECT COUNT(*) FROM Tablename WHERE Colname = 'searchingdata') > 0)
then keep the results in a separate table (ColnameWhereFound, RecNrWhereFound).
Data types may be an issue if you try to compare strings with numbers, but note that under SQL Server, for instance, the syscolumns table contains a column called "usertype" holding a number that refers to the data type stored in the column; I guess Oracle has something similar too.
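A minimal dynamic-SQL sketch of that idea for SQL Server, restricted to string columns to sidestep the data type problem (Tablename and the search string are placeholders):
DECLARE @sql NVARCHAR(MAX) = N'';
-- Build one "column LIKE '%searchingdata%'" predicate per string column.
SELECT @sql = @sql
    + CASE WHEN @sql = N'' THEN N'' ELSE N' OR ' END
    + QUOTENAME(COLUMN_NAME) + N' LIKE N''%searchingdata%'''
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Tablename'
  AND DATA_TYPE IN ('char', 'varchar', 'nchar', 'nvarchar');
SET @sql = N'SELECT * FROM Tablename WHERE ' + @sql;
EXEC sp_executesql @sql;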
Hope this helps.