For (non-)regression testing purposes, I frequently need to make DB2 LUW return a "fake" current date.
This is of course due to application code that relies on the current date/timestamp and will behave differently when run on a different date.
We can change the operating system date (Linux, for instance), since the testing environments are isolated and dedicated per tester.
Unfortunately, this doesn't help much, since we face at least two problems:
1) Binding programs (COBOL) when the system date has gone backward gives errors (tables not found, etc.)
2) Functions created after the 'past' system date are also not usable.
For point 1), we can set the date to the present, bind, then go back to the past.
But for point 2), I haven't found a workaround.
Does anybody have experience with this problem? Any alternatives, including free or proprietary software, are welcome.
Many years ago, our shop purchased a proprietary utility to assist in testing Year-2000-related program changes.
The software allowed us to specify an arbitrary "current" date and time in our test JCL, using parameters "ALTDATE" and "ALTTIME":
//STEP1 EXEC PGM=MYPGM,ALTDATE=MM/DD/YYYY,ALTTIME=HH.MM
Program calls to system date routines, like COBOL "ACCEPT ... FROM DATE" or DB2 "CURRENT TIMESTAMP", would then return values based on a "fake" system clock that started with the specified date and time.
I believe the product also supported use of a simulated clock in CICS regions for testing of on-line applications, but I could be mistaken. Unfortunately, management decided to stop renewing the product license some time after Y2K had passed, even though several of us developers found it to be very useful for testing date-sensitive logic.
Although I do not know the name of the product we used, a Google search turns up one called "Simulate 2000" by DTS Software that appears to have identical functionality.
I learned how to code in SQL about 2 months ago, so I'm still pretty new and still learning different commands/functions each day. I have been tasked with migrating some queries from Teradata to Redshift, and there are obviously some differences in syntax. I have been able to replace most of them, but I am stuck on "SYS_CALENDAR". Can someone explain to me how SYS_CALENDAR works so I could potentially hard-code it, or does anyone know any suitable replacement that runs within AWS Redshift?
Thanks
As someone who has ported a large Teradata solution to Redshift, let me say good luck. These are very different systems, and porting the SQL to achieve functional equivalence is only the first challenge. I'm happy to have an exchange on what these challenges will likely be if you like, but first, your question.
SYS_CALENDAR in Teradata is a system view that can be used like a normal view and holds information about every date. It can be queried or joined as needed to get, for example, the day-of-week or week-of-year information for a date. It really performs a date calculation function based on OS information but is used like a view.
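For illustration, a typical Teradata usage pattern looks something like this (the orders table and its columns are hypothetical):

-- Join the system calendar view to decorate a date column
SELECT o.order_id,
       c.day_of_week,
       c.week_of_year
FROM orders o
JOIN SYS_CALENDAR.CALENDAR c
  ON c.calendar_date = o.order_date;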
No equivalent view exists in Redshift, and this creates some porting difficulties. Many create "DATES" tables in Redshift to hold the information they need for dates across some range, and there are web pages on making such a table (e.g. https://elliotchance.medium.com/building-a-date-dimension-table-in-redshift-6474a7130658). Just pre-calculate all the date information you need for the range of dates in your database, and you can swap this into queries when porting. This is the simplest route to take and is the one that many choose (sometimes wrongly).
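A minimal sketch of such a pre-calculated table, assuming a recent Redshift release (recursive CTE support) and an arbitrary ten-year range:

-- Hypothetical DATES table; the fixed range is exactly the risk discussed below
CREATE TABLE dates AS
WITH RECURSIVE seq (n) AS (
    SELECT 0
    UNION ALL
    SELECT n + 1 FROM seq WHERE n < 3652   -- roughly ten years of days
)
SELECT DATEADD(day, n, DATE '2020-01-01')                  AS calendar_date,
       DATE_PART(dow,  DATEADD(day, n, DATE '2020-01-01')) AS day_of_week,
       DATE_PART(week, DATEADD(day, n, DATE '2020-01-01')) AS week_of_year
FROM seq;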
The issue with this route is that a user-maintained DATES table is often a time bomb waiting to go off and technical debt for the solution. The table only has the dates you specify at creation, and the range of dates in use tends to expand over time. When it is used with a date that isn't in the DATES table, wrong answers are produced, data is corrupted, and it is usually silent. Not good. Some create processes to expand the date range, but again, this is based on some "expectation" of how the table will be used. It is also a real, ever-growing table that is frequently queried, causing potential performance issues, and it isn't really needed - a performance tax for all time.
The better long-term answer is to use the native Redshift (Postgres) date functions to operate on the dates as you need. Doing this uses the OS's understanding of dates (without bound) and does what Teradata does with the system view (calculates the needed information). For example, you can get the week-of-year of a date by using the DATE_PART() function instead of joining with the SYS_CALENDAR view. This approach doesn't have the downsides of the DATES table, but it does come with a porting cost. The structure of the queries needs to change (remove joins and add functions), which takes more work and requires understanding of the original query. Unfortunately, time, work, and understanding are often in short supply when porting databases, which is why the DATES table approach is so often seen and lives forever as technical debt.
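As a sketch, the SYS_CALENDAR join above might be rewritten like this (same hypothetical orders table):

-- Compute the calendar attributes inline instead of joining a view
SELECT order_id,
       DATE_PART(dow,  order_date) AS day_of_week,
       DATE_PART(week, order_date) AS week_of_year
FROM orders;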
I assume that this port is large in nature, and if so, my recommendation is this - lay out these trade-offs for the stakeholders. If they cannot absorb the time to convert the queries (likely), propose the DATES table approach, but have the technical debt clearly documented along with the "end date" at which functionality will break. I'd pick a somewhat close date, like 2025, so that some action will need to appear on the long-term plans. Have triggers documented as to when action is needed.
This will not be the first of these "technical debt" issues that comes up in a port such as this. There are too many places where "get it done" will trump "do it right". You haven't even scratched the surface on performance issues - these are very different databases, and data solutions tuned, over time, for Teradata will not perform optimally on Redshift after a simple port. This isn't an "all is lost" level issue. Just get the choices documented along with their long-term implications. Have triggers (dates or performance measures) defined for when aspects of the "port" will need to be followed up with an "optimization" effort. Management likes to forget about the need for follow-up on these efforts, so get these documented.
I am planning to migrate Oracle 11g to MS SQL Server 2016, so I performed a pre-migration assessment through SSMA.
I received the final conversion report from SSMA, but with numerous errors. The report states that it will require 1263.6 hours of manual conversion from Oracle to SQL Server.
Please help me: how can I resolve these errors with minimal manual conversion time?
Attached is the screenshot for the same.
Appreciate your help
Thanks,
Amit
You must understand one important concept: migrating from Oracle to SQL Server, or vice versa, is only as easy as the level of complexity in your source database allows. In your case, you are using SSMA to make an assessment of your Oracle source database.
Try reading up on the rules SSMA applies when migrating from Oracle to get more details about the rules applied for each possible transformation.
There is no concrete answer to your question; you have a lot of different problems in the screenshot you have provided. Most importantly, even though SSMA will make automatic conversions (for example, schemas), you need to evaluate the impact on your application. I also saw problems in PL/SQL objects, which you will somehow need to convert to Transact-SQL. Bottom line: you have a lot of manual work to do.
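To give a feel for that manual work, here is a trivial, hypothetical PL/SQL function next to a rough Transact-SQL equivalent (two separate snippets, one per dialect, not meant to run as one script; names and types are made up):

-- Oracle PL/SQL (hypothetical example)
CREATE OR REPLACE FUNCTION get_bonus (p_salary IN NUMBER) RETURN NUMBER IS
BEGIN
    RETURN NVL(p_salary, 0) * 0.1;  -- NVL is Oracle-specific
END;
/

-- Rough Transact-SQL equivalent (run separately, on SQL Server)
CREATE FUNCTION dbo.get_bonus (@salary DECIMAL(18,2))
RETURNS DECIMAL(18,2)
AS
BEGIN
    RETURN ISNULL(@salary, 0) * 0.1;  -- ISNULL replaces NVL
END;

Real objects will be far messier than this, which is where the estimated hours go.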
SSMA is already making the case in your situation by reporting that some of the source elements cannot be converted automatically and that manual intervention is therefore needed.
As you are discovering, migrating from one DB server to another is not just a simple matter of relocating the data, especially when (as it appears) your application leans heavily on Oracle-specific technology. By all appearances - as noted repeatedly in this thread - you will have a lot of manual reconciliation and rewriting of application code to do. There is a hidden cost here: the time and effort required to do this work doesn't come for free. It will cost your company real $$$ to make this happen. You should be prepared to answer the following questions:
Is the cost of the time and effort to rewrite the application and complete the transformation going to be less than any cost savings realized by switching from Oracle to SQL Server?
Understanding that it may cost more in the short term to rewrite the application than continuing with the status quo, how long will it take to realize any cost savings at all?
On a technical level, given the number of Oracle technologies in play (custom types, stored procedures, etc.), can SQL Server even replicate the functionality required by the application that is currently provided by Oracle?
What is the driving force behind this migration, and does it really make sense if this level of effort is required?
If data migration is still required, is it easier to rebuild the application from scratch and just move the data than it would be to port the entire existing application?
I am implementing a bitemporal solution for a few of our tables, using the native temporal table features, and some custom columns and code to handle the application/valid time.
However, I just stumbled across a reference to something which is supposedly in the SQL:2011 standard:
From Wikipedia:
As of December 2011, ISO/IEC 9075, Database Language SQL:2011 Part 2: SQL/Foundation included clauses in table definitions to define "application-time period tables" (valid time tables), "system-versioned tables" (transaction time tables) and "system-versioned application-time period tables" (bitemporal tables).
This PDF actually has code to do this (application-time):
CREATE TABLE Emp(
ENo INTEGER,
EStart DATE,
EEnd DATE,
EDept INTEGER,
PERIOD FOR EPeriod (EStart, EEnd)
)
This code will not run in SSMS. Has something changed that makes this invalid SQL now? It looks like what used to be undocumented support for application-time/bitemporal tables has now been removed?
Just because it's in the standard doesn't mean it's in any particular implementation. Each vendor has a stretch goal of full standard coverage, but not one of them is there yet, and I doubt it will happen in my lifetime.
Currently SQL Server supports system time, but it does not support application time. There may be another vendor who does; I'm not sure, as I don't follow all the various RDBMS platforms as they mature. I know it's on the SQL Server radar but there have been no formal announcements to date.
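For completeness, the system-time side that SQL Server 2016+ does support looks roughly like this (a sketch; the table and history-table names are mine, not from the PDF):

-- System-versioned (transaction-time) table in SQL Server 2016+
CREATE TABLE dbo.Emp (
    ENo      INT PRIMARY KEY,   -- temporal tables require a primary key
    EDept    INT,
    SysStart DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    SysEnd   DATETIME2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (SysStart, SysEnd)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.EmpHistory));

Note that the period here is SYSTEM_TIME, maintained by the engine; there is no analogous clause for an application-maintained period.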
The example in the PDF is just that: an example of what could be done by a platform that supports application time. The next example is this...
INSERT INTO Emp
VALUES (22217,
DATE '2010-01-01',
DATE '2011-11-12', 3)
...which also isn't valid in SQL Server for more than one reason, and violates a few best practices to boot. Maybe this stuff is all valid in DB2, as you suggest, but the standard is not supposed to be vendor-specific. I mean, by definition, if nothing else.
IBM DB2 supports what you are asking about. Think of the SQL standard as a definition of the recommended way a vendor should expose a feature if they support it - well, at least after SQL-92, which serves as a kind of core. In the history of SQL dialects, vendors sometimes get ahead of the standard and dialects diverge. A vendor would be kind of foolish to implement a feature in a non-standard way after it has been standardized, but sometimes they do. Hot on the left, cold on the right; that is a standard. It works the other way around too, but people tend to get burned.
In this case, it looks like IBM decided to implement the feature and make their way of implementing it part of the standard in one fell swoop. Microsoft has not yet decided it is worth their trouble.
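For reference, the DB2 flavor of the PDF's example looks roughly like this (my sketch of DB2's documented PERIOD BUSINESS_TIME syntax):

-- Application-time (business-time) period table in DB2
CREATE TABLE Emp (
    ENo    INTEGER,
    EStart DATE NOT NULL,   -- period columns must be NOT NULL in DB2
    EEnd   DATE NOT NULL,
    EDept  INTEGER,
    PERIOD BUSINESS_TIME (EStart, EEnd)
);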
I am facing a new task in my job and I need to find out how to generate and administer test data. Googling led to a lot of information about specific test data generation like filling a database with random data or camouflaged production data, generating files, generating test data with multi-objective genetic algorithms to minimize test data and optimize coverage, etc.
But my task is somewhat harder, because the environment is not just a single database; it's a heterogeneous environment which evolved over time, consisting of databases, files, different servers, programs, etc. Time must also be simulated, for example by files aging, and so on.
I am somewhat lost here and need some starting points from which I can dig further into the material.
Do you know any tools, knowledge sources, websites, books, experience reports or anything else covering the topic "evolving testing environments"?
Sounds like a daunting environment; I'd suggest using a "divide and conquer" approach to identify all the test data variables. Make a list of each element of the environment that needs to be varied under test, e.g.
Database type
File age
File size
Server operating system
Programs running on the server
(I'm just guessing at the different elements here based on your question). Then, for each element, make a list of values for it, e.g.
Database type: Oracle, MySQL, PostgreSQL
Server operating system: Windows Server 2003, Windows Server 2008, Fedora 12 Linux
When you're done with that, figure out which values are most important to test; for example, you might want to prioritize Oracle if 80% of your customers use Oracle.
Finally, you should have a set of values for the different environment elements that you can use to create test environments by using different combinations of the element values, using the most important ones first.
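If you track this inventory in a database anyway, generating the combinations can be as simple as a cross join; a sketch with made-up tables and columns:

-- Every combination of database type and server OS, most important first
-- (db_types and server_os are hypothetical inventory tables)
SELECT d.db_type, o.os_name
FROM db_types d
CROSS JOIN server_os o
ORDER BY d.priority, o.priority;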
I'm working with a legacy database which, due to poor management and design, has had a wild growth of columns which have never been used or are no longer being used.
Is it possible to somehow query for column usage? As in, how often a column is being selected (either specifically, or with *, or joined on)?
It seems to me like this is something we should be able to retrieve, but I have been unable to find anything like it.
Greetings,
F.B. ten Kate
Unfortunately, this analysis on the DB side isn't really going to be a full answer. I've seen a LOT of instances where application code only needed 3 columns of a 10+ column table, but selected them all anyway.
Your column would still show up on a usage report in any sort of trace or profiling you did, but it still may not ACTUALLY be in use.
You might have to either a) analyze the entire collection of apps that use this database, or b) start drafting a return-on-investment-style doc on whether it's worth rebuilding.
This article will give you a good idea of how to search all fixed code (procedures, views, functions and triggers) for the columns that are used. The code in the article searches for a specific table/column combination, but you could easily adapt it to run for all columns. For anything dynamically executed, you'd probably have to set up a profiler trace.
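The core of that technique, widened to one column name across all modules, is roughly this (assuming SQL Server, which the profiler reference suggests; 'MyColumn' is a placeholder):

-- Find procedures, views, functions and triggers whose text mentions a column
SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id)        AS object_name
FROM sys.sql_modules AS m
WHERE m.definition LIKE '%MyColumn%';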
Even if you could determine whether a column had been used in the past X period of time, would that be good enough? There may be some obscure program out there that populates a column once a week, a month, a year; or once every time they click the mystery button that no one ever clicks, or to log the report that only Fred in accounting ever runs (he quit two years ago), or that gets logged to if that one rare bug happens (during daylight savings time, perhaps?)
My point is, the only way you can truly be certain that a column is absolutely not used by anything is to review everything -- every call, every line of code, every ad hoc Excel data dump, every possible contingency -- everything that references the database. As this may be all but unachievable, try to get a formally defined group of programs and procedures that must be supported, bend over backwards to make sure they are supported, and be prepared to fix things when some overlooked or forgotten piece of functionality turns up.