An NSDate object represents an absolute date and time, e.g. September 4, 2012, 10:00 PM CDT. That works fine when an event did indeed happen at one specific moment in time, but NSDate is much more of a hassle when you're dealing with a recurring event. For example, I'm currently working on an app that stores the hours of operation of businesses. Most businesses have a weekly schedule, which means I would like to store the times per weekday, regardless of the date.
There are several solutions: create an extra entity (I'm working with Core Data), Hours, with attributes weekday, hour, and minute, and work it out from there. This could work for simple display, but I'm also going to add a "status" (such as "open until x", "closing in y minutes", or "will open at z"). This means I'll either have to create NSDate objects to do the comparing, or extract the weekday, hour, and minute components from the current NSDate.
Another option would be to store two NSDates per business (open and close), ignore the actual date, and use only the weekday, hour, and minute components. I've tried this, but to be able to compare dates, I'd still have to manipulate NSDate objects.
These are the solutions I've come up with. Both require a lot of math and involve a bunch of ifs, buts, and maybes. It would be really easy to simply have some sort of "NSTime" object with which I can do everything, but that doesn't (seem to) exist.
Has anyone else had the same problems and found a better solution?
I think you're better off creating your own abstractions that better fit the problem you're trying to solve. Some pointers that may help:
Martin Fowler's Recurring Events for Calendars patterns (PDF).
ice_cube: a Ruby library for recurring events (for the design ideas).
It would be really easy to simply have some sort of "NSTime" object with which I can do everything, but that doesn't (seem to) exist.
One option is to use NSDateComponents, in which you can store just the parts of a date that you're interested in, like hours, minutes, and seconds.
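For example, in Swift (where NSDateComponents bridges to DateComponents), a minimal sketch might look like this; the weekday numbering follows Calendar's convention:

```swift
import Foundation

// Minimal sketch: store opening hours as components, not absolute dates.
// Calendar's weekday convention: 1 = Sunday, 2 = Monday, and so on.
var opensAt = DateComponents()
opensAt.weekday = 2   // Monday
opensAt.hour = 9
opensAt.minute = 30

// To compute a status, extract the same components from "now" and compare.
let now = Calendar.current.dateComponents([.weekday, .hour, .minute],
                                          from: Date())
let sameWeekday = (now.weekday == opensAt.weekday)
```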
Since you really just want to store a time of day, another option is to create your own Time class. NSDate stores moments in time as a single number: the number of seconds since a fixed time, the epoch. Your Time class could do nearly the same thing, except that it would use midnight as the reference point. You may run into problems, though, if you're not able to indicate times beyond the end of the day. For example, if a restaurant stays open until 2am, you might want to be able to represent that relative to the day when the restaurant opened. Perhaps a better option is to have your Time class use NSDate internally, but always with a fixed starting date.
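A minimal sketch of the seconds-since-midnight idea in Swift (the Time type and its members are hypothetical), deliberately allowing values past 24 hours so the 2am case stays representable relative to the opening day:

```swift
// Hypothetical Time type: seconds since midnight, allowed to exceed
// 24 hours so "open until 2am" stays relative to the opening day.
struct Time: Comparable {
    let secondsSinceMidnight: Int

    init(hour: Int, minute: Int) {
        secondsSinceMidnight = hour * 3600 + minute * 60
    }

    static func < (lhs: Time, rhs: Time) -> Bool {
        lhs.secondsSinceMidnight < rhs.secondsSinceMidnight
    }
}

let opens = Time(hour: 9, minute: 0)     // 9:00am
let closes = Time(hour: 26, minute: 0)   // 2:00am the next day
let staysOpenOvernight = closes.secondsSinceMidnight > 24 * 3600  // true
```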
I learned how to code in SQL about 2 months ago, so I'm still pretty new and still learning different commands/functions each day. I have been tasked with migrating some queries from Teradata to Redshift, and there are obviously some syntax differences. I have been able to replace most of them, but I am stuck on "SYS_CALENDAR". Can someone explain to me how SYS_CALENDAR works so I could potentially hard-code it, or does anyone know any suitable replacements that run within AWS Redshift?
Thanks
As someone who has ported a large Teradata solution to Redshift, let me say good luck. These are very different systems, and porting the SQL to achieve functional equivalence is only the first challenge. I'm happy to have an exchange on what these challenges will likely be if you like, but first, your question.
SYS_CALENDAR in Teradata is a system view that can be used like a normal view and holds information about every date. It can be queried or joined as needed to get, for example, the day-of-week or week-of-year information for a date. It really performs a date calculation based on OS information but is used like a view.
No equivalent view exists in Redshift, and this creates some porting difficulties. Many create "DATES" tables in Redshift to hold the information they need for dates across some range, and there are web pages on making such a table (e.g. https://elliotchance.medium.com/building-a-date-dimension-table-in-redshift-6474a7130658). Just pre-calculate all the date information you need for the range of dates in your database, and you can swap this into queries when porting. This is the simplest route to take and the one that many choose (sometimes wrongly).
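For illustration, a sketch of what such a table might look like (the column names and date range here are assumptions; the linked article covers population in detail):

```sql
-- Illustrative sketch only: column names and the date range are assumptions.
-- A user-maintained calendar table must cover every date you will ever query;
-- dates outside the populated range silently return no rows when joined.
CREATE TABLE dates (
    calendar_date DATE     NOT NULL,
    day_of_week   SMALLINT NOT NULL,   -- DATE_PART(dow,  calendar_date)
    week_of_year  SMALLINT NOT NULL,   -- DATE_PART(week, calendar_date)
    calendar_year SMALLINT NOT NULL    -- DATE_PART(year, calendar_date)
);
-- Populate once for a fixed range, e.g. 2000-01-01 through 2049-12-31,
-- then join on calendar_date wherever the port needs date attributes.
```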
The issue with this route is that a user-maintained DATES table is often a time bomb waiting to go off and technical debt for the solution. The table only has the dates you specify at creation, and the range of dates in use often expands over time. When it is used with a date that isn't in the DATES table, wrong answers are produced and data is corrupted, usually silently. Not good. Some create processes to expand the date range, but again this is based on some "expectation" of how the table will be used. It is also a real, ever-expanding, frequently used table that isn't really needed, creating potential query performance issues - a performance tax for all time.
The better long-term answer is to use the native Redshift (Postgres) date functions to operate on the dates as you need. Doing this uses the OS's understanding of dates (without bound) and does what Teradata does with the system view (calculate the needed information). For example, you can get the work-week of a date by using the DATE_PART() function instead of joining with the SYS_CALENDAR view. This approach doesn't have the downsides of the DATES table but does come with porting cost. The structure of the queries needs to change (remove joins and add functions), which takes more work and requires understanding of the original query. Unfortunately, time, work, and understanding are often in short supply when porting databases, which is why the DATES table approach is so often seen and lives forever as technical debt.
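As a sketch of the difference (the orders table and its columns are hypothetical, and the two systems may number weeks differently, so verify the semantics):

```sql
-- Teradata: date attributes come from joining the system calendar view.
SELECT o.order_id, c.week_of_year
FROM orders o
JOIN sys_calendar.calendar c
  ON c.calendar_date = o.order_date;

-- Redshift: the same information from a date function, with no join at all.
SELECT order_id, DATE_PART(week, order_date) AS week_of_year
FROM orders;
```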
I assume that this port is large in nature, and if so my recommendation is this - lay out these trade-offs for the stakeholders. If they cannot absorb the time to convert the queries (likely), propose the DATES table approach, but have the technical debt clearly documented along with the "end date" at which functionality will break. I'd pick a somewhat close date, like 2025, so that some action will need to be put on the long-term plans. Have triggers documented as to when action is needed.
This will not be the only "technical debt" issue that comes up in a port such as this. There are too many places where "get it done" will trump "do it right". You haven't even scratched the surface on performance issues - these are very different databases, and data solutions tuned, over time, for Teradata will not perform optimally on Redshift after a simple port. This isn't an "all is lost" level issue. Just get the choices documented along with their long-term implications. Have triggers (dates or performance measures) defined for when aspects of the "port" will need to be followed up with an "optimization" effort. Management likes to forget about the need for follow-up on these efforts, so get them documented.
I am trying to create an interactive table which can display work shifts for a given week. I wanted the table to look something like this.
I've managed to recreate something like this using VB.NET; however, I have done it in (what I consider to be) a ridiculous way. I have used a TableLayoutPanel with a column for each day and a row for each hour of the day. This works, looks okay, and also allows editing of shifts (i.e., you click a cell to turn it green and add it to a shift).
The problems with this solution are (a) there is no easy way to represent half-hour or quarter-hour segments, and (b) because each cell contains a blank label which then gets coloured in, the form takes a long time (~6 seconds) to load, which seems unnecessary to me. What would be a better way to implement something similar to this? Preferably without using the VB reporting, since I can't seem to get that to work, though that's a separate issue.
Thanks.
I'm working on building a simple API to consume data sent from small network-connected sensors/devices (think Arduino, Raspberry Pi, etc.). I want to log a reasonably accurate timestamp of when an event occurred on the remote device. Due to potential connectivity issues, the event might not always get sent back to the server right away. I don't want to rely on synchronizing a clock on the device if I can avoid it, so I'm going to try sending back a parameter that simply contains the number of seconds since the event occurred. So, for example, an event is detected on the device, but for some reason it gets sent to the server 5 seconds later. The data would include a number "5", signifying that this happened 5 seconds ago according to the device's internal clock. The server would then take its own clock time and subtract 5 seconds to generate the timestamp.
I'd like to come up with a parameter name that describes this time span that makes sense. Some options may include:
TimeSince
TimeAgo
DurationSince
However, since this is a simple numeric field, I want the name to include the unit of measure for extra clarity, such as:
SecondsSince
SecondsAgo
TimeAgoSeconds
Has anyone come across common and/or sensible naming conventions for this kind of thing? Time since an event, and additionally, where and how to indicate units in a parameter name? None of my naming ideas really feel "right" but perhaps some discussion here might help identify one approach as being better than another.
Thanks.
My feeling is that ElapsedSeconds sounds reasonable.
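Whatever name you settle on, the server-side math is just a subtraction. A minimal sketch in Swift, using ElapsedSeconds as suggested (the payload shape is hypothetical):

```swift
import Foundation

// Hypothetical payload: the device reports how long ago the event occurred.
struct SensorEvent: Decodable {
    let elapsedSeconds: Double   // the "ElapsedSeconds" parameter
}

// Anchor the event to the server's clock by subtracting the reported age.
func eventTimestamp(for event: SensorEvent,
                    receivedAt now: Date = Date()) -> Date {
    now.addingTimeInterval(-event.elapsedSeconds)
}

// An event reported 5 seconds after it happened:
let occurredAt = eventTimestamp(for: SensorEvent(elapsedSeconds: 5))
```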
Now, are you buffering those events? If they are meant to happen "every n seconds", how do you handle a missed value? I mean, you can buffer n events; if they are not transmitted, then you would probably overwrite the first of those n. When you transmit your whole buffer, how do you account for the missing one? Depending on the application, it might be worth filling in a NaN for the event at that moment.
Does this make sense?
I am developing a calendar application that needs to draw rectangles whose heights and vertical positions are based on the start dates of the events they represent. I am trying to test the layout system against dates and time zones with daylight saving. Specifically, I want to account for the fact that in some regions daylight saving can remove/add an hour to the day.
Currently I'm stumped on how to write unit tests against daylight saving time.
See the NSTimeZone class reference. It has a handy BOOL property, -daylightSavingTime, and related friends you may find useful. You can construct a date with a specific time zone/date combination to get what you need and feed that in instead of the system-provided time.
I'm not sure there's a way to change the system time (even in the simulator) programmatically, however. I haven't attempted anything like this but perhaps a creative use of some preprocessor macros and/or environment variables could let you toggle between test states.
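As a sketch of the "feed in a fixed date" idea in Swift, using one known DST transition (US spring-forward on 2012-03-11) picked purely for illustration:

```swift
import Foundation

// Build dates in a DST-observing zone rather than reading the system clock.
var calendar = Calendar(identifier: .gregorian)
let chicago = TimeZone(identifier: "America/Chicago")!
calendar.timeZone = chicago

// On 2012-03-11 in America/Chicago the clocks jump from 2:00 to 3:00am,
// so this "day" is only 23 hours long - the case the layout must handle.
let midnight = calendar.date(from: DateComponents(year: 2012, month: 3, day: 11))!
let nextMidnight = calendar.date(byAdding: .day, value: 1, to: midnight)!
let hoursInDay = nextMidnight.timeIntervalSince(midnight) / 3600   // 23.0

// The daylightSavingTime check mentioned above, via Swift's TimeZone:
let isDST = chicago.isDaylightSavingTime(for: nextMidnight)        // true
```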
I'm working with a legacy database which, due to poor management and design, has had a wild growth of columns which never have been or are no longer being used.
Is it possible to somehow query for column usage? As in, how often a column is being selected (either specifically or with *, or joined on)?
It seems to me like this is something we should be able to retrieve somehow, but I have been unable to find anything like this.
Greetings,
F.B. ten Kate
Unfortunately, this analysis on the DB side isn't really going to be a full answer. I've seen a LOT of instances where application code only needed 3 columns of a 10+ column table, but selected them all anyway.
Your column would still show up on a usage report in any sort of trace or profiling you did, but it still may not ACTUALLY be in use.
You might have to either a) analyze the entire collection of apps that use this database or b) start drafting a return-on-investment style doc on whether it's worth rebuilding.
This article will give you a good idea of how to search all fixed code (procedures, views, functions, and triggers) for the columns that are used. The code in the article searches for a specific table/column combination. You could easily adapt it to run for all columns. For anything dynamically executed, you'd probably have to set up a profiler trace.
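If this is SQL Server (the mention of a profiler trace suggests it), a minimal sketch of that style of search might look like the following; the column name is a placeholder, and plain string matching will produce false positives you'll need to verify by hand:

```sql
-- Minimal sketch, SQL Server assumed: list every module whose definition
-- mentions a given column name. The column name below is a placeholder,
-- and matches in comments or similarly named columns will also show up.
SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id)        AS module_name,
       o.type_desc
FROM sys.sql_modules m
JOIN sys.objects o
  ON o.object_id = m.object_id
WHERE m.definition LIKE '%SomeColumn%';
```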
Even if you could determine whether a column had been used in the past X period of time, would that be good enough? There may be some obscure program out there that populates a column once a week, a month, a year; or once every time they click the mystery button that no one ever clicks, or to log the report that only Fred in accounting ever runs (he quit two years ago), or that gets logged to if that one rare bug happens (during daylight savings time, perhaps?)
My point is, the only way you can truly be certain that a column is absolutely not used by anything is to review everything -- every call, every line of code, every ad hoc Excel data dump, every possible contingency -- everything that references the database. As this may be all but unachievable, try to get a formally defined group of programs and procedures that must be supported, bend over backwards to make sure they are supported, and be prepared to fix things when some overlooked or forgotten piece of functionality turns up.