I've found a few questions on this but none seem to fit my problem case quite right.
Overview: Data is in an Oracle 10g database; the requirement includes using MS Access as a front end.
Problem: The tables include date fields that are incompatible with MS Access, and I NEED to run queries based on date and time in MS Access.
Details:
I'm not allowed to redesign the tables
I decided to create new tables on the server and run inserts from the old tables into the new ones
Probably sounds weird but given the constraints I'm allowed to do what I want if I duplicate the data
With the new tables, I want to take the date/time/timezone field from the old table and insert it into the new table as a plain date/time value, with the timezone stripped out and stored in a separate field by itself
The big requirement is that the data remain usable. If I do a TO_CHAR it becomes a string, and I can't set up queries based on date and time against that, since it's just static text at that point.
Any help is appreciated! Thanks !!!
If possible, the best way I have found to deal with these issues is to link to the table via a view. You can then present the data however you wish under the hood without having to alter the table structures.
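As a sketch of that idea, a view could cast the TIMESTAMP WITH TIME ZONE value to a plain timestamp or DATE so Access sees a type it understands. The table and column names here (ORDERS, ORDER_TS) are placeholders, not from the original post:

```sql
-- Hypothetical example: expose a timezone-free value to MS Access.
CREATE OR REPLACE VIEW orders_for_access AS
SELECT order_id,
       CAST(order_ts AS TIMESTAMP) AS order_ts_local,  -- drops the time zone
       CAST(order_ts AS DATE)      AS order_dt         -- plain Oracle DATE
FROM   orders;
```

Linked through ODBC, the view behaves like a table in Access, and date/time criteria in Access queries work against the cast columns.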
I found an answer for this. Looking here:
Oracle Date Functions
They give some samples wrapping a TO_CHAR with TO_DATE. I formatted it to convert the value to text, stripping the time zone, then wrapped it with TO_DATE to convert it back to a date-and-time value that's compatible with MS Access. Here's the code:
SELECT TO_DATE(TO_CHAR(my_table.date_col, 'DD-MON-YYYY HH24:MI:SS'), 'DD-MON-YYYY HH24:MI:SS')
FROM my_table;
I want to start by saying my SQL knowledge is limited (the SoloLearn SQL basics course is all I've taken), and I have fallen into a position where I am regularly asked to pull data from the SQL database for our ERP software. I have been pretty successful so far, but my current problem is stumping me.
I need to filter my results by having the date match from 2 separate tables.
My issue is that one of the tables outputs DATETIME with full time data. e.g. "2022-08-18 11:13:09.000"
While the other table zeros the time data. e.g. "2022-08-18 00:00:00.000"
Is there a way I can on the fly convert these to just a DATE e.g. "2022-08-18" so I can set them equal and get the results I need?
A simple CAST statement should work if I understand correctly.
CAST(dateToConvert AS DATE)
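For example, the cast can be applied on both sides of the join condition. The table and column names below are placeholders, not from the original question:

```sql
-- Hypothetical tables: orders stores full DATETIME values,
-- shipments stores the time zeroed out.
SELECT o.*, s.*
FROM   orders o
JOIN   shipments s
       ON CAST(o.order_dt AS DATE) = CAST(s.ship_dt AS DATE);
```

One caveat: wrapping a column in a function inside a join predicate generally prevents index seeks on that column, so if these tables are large it may be worth materializing a date-only column instead.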
I have a two tables in a database in AWS Athena that I want to join.
I want to join them by several columns, one of them being date.
However, in one data set the date string for single-digit months is encoded as
"08/31/2018"
While the other would have it encoded as
"8/31/2018"
Is there a way to make them the same format?
I am unsure whether it is easier to add the extra 0 to strings that lack it or to strip it from strings that have it.
Based on what I have researched I think I will have to use the CASE and CONCAT functions.
Both of the tables were loaded into the database from a CSV file, and the variables are in the string format.
I have tried changing the values manually in the CSV file, running an R script on one of the tables to format the date the same way, and re-loading the tables into the database with the same date format.
However, no matter what I do, whenever the data is loaded into the database, even when both files use the same date format, the tables end up with different formats:
one with the extra 0 and the other without it.
The last avenue I haven't tried is through a SQL query.
However I am not well versed in Athena and am having a hard time formatting this query.
I know this is rather vague, so please ask me for more information if you need.
If someone could help me start this query I would be grateful.
Thank you for the help.
Here is the Athena function for converting such date strings to dates:
date_parse(table.date_variable, '%m/%d/%Y')
Note, though, that Athena tables are immutable once created, so the conversion has to happen in each query rather than as an in-place update.
You can convert the values to dates using date_parse(). So, this should work as a join condition:
date_parse(t1.datecol, '%m/%d/%Y') = date_parse(t2.datecol, '%m/%d/%Y')
Having said that, you should fix the data model. Store dates as dates, not as strings! Then you can use an equality join, and that is just better all around.
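A full join along those lines might look like the sketch below. The table and column names are placeholders, and it assumes date_parse accepts single-digit months in this format, as the answer above suggests:

```sql
-- Hypothetical Athena/Presto query joining two tables whose date
-- columns are stored as strings in m/d/yyyy or mm/dd/yyyy form.
SELECT t1.*
FROM   sales t1
JOIN   returns t2
       ON  t1.store_id = t2.store_id
       AND date_parse(t1.sale_date, '%m/%d/%Y')
         = date_parse(t2.return_date, '%m/%d/%Y')
```

Parsing both sides normalizes away the leading-zero difference, at the cost of parsing every row on every query run.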
In my current application (using SQL 2008) we use a TimeStamp column in each table. When the application reads a row of data we send the timestamp value with the data. When they try to save any changes, we compare the timestamp columns to see if the row was modified by anyone else since it was read. If there has been a change we reject the update and tell them to refresh the data and try again so they can see what was changed and be sure they are not overwriting anything important without knowing. If the timestamps match then we allow the update and send them the new timestamp (in case they want to make more changes).
In SQL 2016 memory optimized tables, they no longer support this column type. They do have row versioning which is great, but is there a way to extract the “timestamp” when the record was created so we can use it in the same way? Is there a new way of doing this we can use instead?
I appreciate any help you can offer.
You could just add a ROWVERSION column to your table? Or if you only need it to satisfy a need for a column in the correct format, and the actual versioning is not required, you could just generate a column on the fly. Since TIMESTAMP (now replaced by ROWVERSION) is VARBINARY(8), you could just convert your Timestamp column to this: SELECT [Timestamp] = CONVERT(VARBINARY(8), Timestamp_S) – GarethD Jul 12 '16 at 13:09
This was taken from SQL Server 2016 timestamp data type, maybe that can help?
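A sketch of the on-the-fly conversion that comment describes is below. The table name is a placeholder and Timestamp_S is assumed to be an ordinary datetime column; note that memory-optimized tables in SQL Server 2016 do not support ROWVERSION columns, so the generated-column route may be the only one available there:

```sql
-- Hypothetical: emulate the 8-byte timestamp/rowversion format from a
-- regular datetime column on a memory-optimized table.
SELECT t.Id,
       [Timestamp] = CONVERT(VARBINARY(8), t.Timestamp_S)
FROM   dbo.MyMemoryOptimizedTable t;
```

This only reproduces the column's shape, not its concurrency semantics; the value changes only when Timestamp_S is explicitly updated.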
In Netezza, I am trying to check whether a date value is valid, something like the ISDATE function in SQL Server.
I am getting dates like 11/31/2013, which is not valid. How can I check in Netezza whether a date is valid, so I can exclude such rows from my process?
Thanks
I don't believe there is a built-in Netezza function to check whether a date is valid. You may be able to write a Lua function to do this, or you could try joining to a "Date" lookup table, like so:
Create a table with two columns:
DATE_VALUE date
DATE_STRING varchar(10)
Load data into this table for every valid date (generate the file in your favorite tool: Excel, Unix, whatever). There can even be more than one row per DATE_VALUE (different "valid" formats) if this check is all you use the table for. If you fill in dates from, say, 1900 to 2100, you'll be fine as long as your data is within that range. It's a small table, too: ~200 years is only about 73,000 rows. Add more if needed. In fact, since the NZ date type runs from AD 1 to AD 9999, you could fill it completely with only about 3.7 million rows (small for NZ).
Then, to isolate rows that have invalid dates, just use a JOIN or an EXISTS / NOT EXISTS to this table, on DATE_STRING. Since the table is so small, netezza will likely broadcast it to all SPUs, making the performance impact trivial.
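A sketch of that lookup approach is below. The staging table and column names are placeholders, and the date strings assume the MM/DD/YYYY format from the question:

```sql
-- Hypothetical lookup table of valid dates.
CREATE TABLE valid_dates (
    date_value  DATE,
    date_string VARCHAR(10)
);

-- Rows such as ('2013-11-30', '11/30/2013') are loaded for every real
-- calendar date; '11/31/2013' never appears, so it fails the check.

-- Keep only rows whose date string is a real date:
SELECT s.*
FROM   staging s
WHERE  EXISTS (SELECT 1
               FROM   valid_dates v
               WHERE  v.date_string = s.add_date_str);
```

Flipping EXISTS to NOT EXISTS isolates the invalid rows instead, which is handy for reporting what was excluded.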
Netezza Analytics Package 3.0 (free download) comes with a couple LUA functions that verify date values: isdate() and todate(). Very simple to install / compile.
I have a project that I am working on that requires me to delete records from the database if they are at least 3 years old.
I have something like this in DB2 SQL to get the date:
SELECT * FROM tableA
WHERE ADD_DATE < CHAR(CURRENT DATE-3 YEARS)
ADD_DATE is stored as characters in my system, which is why I am converting.
I know it is also possible to get the date and format it in VB.net which is the language I am using to call the SQL statements.
My question is whether it would be faster/better to perform the date conversion inside the SELECT in SQL, or to get the current date, convert it in VB.net, and then use that value in the SQL statement. I'm thinking VB.net would be better because there are thousands of records that must be compared. I should be able to set it up in VB so that it only retrieves the date and converts it once, but I am not sure what kind of performance hit each approach takes.
Thanks in advance.
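For reference, the two approaches being contrasted might look like the sketch below. It assumes ADD_DATE is stored in a format that sorts chronologically (such as ISO yyyy-mm-dd); if not, the string comparison itself is unreliable regardless of where the cutoff is computed:

```sql
-- Option 1: compute the cutoff in the database. DB2 evaluates
-- CHAR(CURRENT DATE - 3 YEARS, ISO) once per statement, not per row.
SELECT * FROM tableA
WHERE ADD_DATE < CHAR(CURRENT DATE - 3 YEARS, ISO);

-- Option 2: compute the cutoff client-side (e.g. in VB.net) and pass
-- it in as a parameter, keeping the SQL itself trivial:
SELECT * FROM tableA
WHERE ADD_DATE < ?;   -- e.g. '2013-05-01' supplied by the caller
```

Either way the conversion happens once per statement, so the per-row cost is just the string comparison.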
If all you are doing with a call to the database would be getting the date, then it would be faster to get it client-side and avoid the round-trip to the database.
If you do it server side and you're comparing your date in a single set-based operation then the time difference for that is negligible. If you do the check in something loop-based (a cursor or something) then you'll be wasting time.
It doesn't sound like this is applicable to you, but for future reference be sure to take into consideration the possibility of the client and the database server being in different time zones. It could be safer to do it one way or the other depending on the time zone your data is generated in.
Doing a "Now" in VB.Net will definitely be faster than hitting the database.