I'm working on building a simple API to consume data sent from small network-connected sensors/devices (think Arduino, Raspberry Pi, etc.). I want to log a reasonably accurate timestamp of when an event occurred on the remote device. Due to potential connectivity issues, the event might not always get sent back to the server right away. I don't want to rely on synchronizing a clock on the device if I can avoid it, so I'm going to try sending back a parameter that just contains the number of seconds since the event occurred. So, for example, an event is detected on the device, but for some reason it gets sent to the server 5 seconds later. The data would include a number "5" signifying that this happened 5 seconds ago according to the device's internal clock. The server would then take its own clock time and subtract 5 seconds to generate the timestamp.
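For illustration, the server-side arithmetic I have in mind, as a minimal C sketch (the function and parameter names are just placeholders, not a naming decision):

```c
#include <stdio.h>
#include <time.h>

/* Hypothetical server-side helper: seconds_ago is the value the device
   reported; the event timestamp is derived from the server's own clock. */
time_t event_timestamp(long seconds_ago)
{
    return time(NULL) - (time_t)seconds_ago;
}

int main(void)
{
    time_t when = event_timestamp(5);  /* device said: 5 seconds ago */
    printf("event occurred at: %s", ctime(&when));
    return 0;
}
```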
I'd like to come up with a parameter name that describes this time span that makes sense. Some options may include:
TimeSince
TimeAgo
DurationSince
However since this is a simple numeric field, I want the name to include the unit of measure for extra clarity, such as:
SecondsSince
SecondsAgo
TimeAgoSeconds
Has anyone come across common and/or sensible naming conventions for this kind of thing? Time since an event, and additionally, where and how to indicate units in a parameter name? None of my naming ideas really feel "right" but perhaps some discussion here might help identify one approach as being better than another.
Thanks.
My feeling is that ElapsedSeconds sounds reasonable.
Now, are you buffering those events? If they are meant to happen "every n seconds", how do you handle a missed value? I mean, you can buffer n events, and if they are not transmitted then you would probably overwrite the first of those n. When you transmit your whole buffer, how do you account for the missing one? Depending on the application it might be worthwhile to fill in a NaN for the event at that moment, as sketched below.
Does this make sense?
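A minimal C sketch of the buffering idea (all names hypothetical):

```c
#include <math.h>

#define BUF_SIZE 16   /* keep the last n readings */

/* Hypothetical ring buffer of readings; a missed sample is stored
   as NAN so the gap survives until the buffer is transmitted. */
static double readings[BUF_SIZE];
static int head = 0;

void record_reading(double value)
{
    readings[head] = value;            /* overwrites the oldest entry */
    head = (head + 1) % BUF_SIZE;
}

void record_missed(void)
{
    record_reading(NAN);               /* placeholder for the missed event */
}
```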
Suppose my process uses epoll with edge triggering, and the following scenario takes place:
1. Call epoll_wait(); it succeeds with one fd ready for reading.
2. While recv() succeeds, keep reading all data.
3. recv() returns EWOULDBLOCK.
4. More data comes in now.
5. Go to step 1.
Will epoll_wait() return immediately? Or wait till next data is incoming?
This is actually answered in the epoll(7) manual page (in the section "Level-triggered and edge-triggered").
What the manual says is that it should work fine, since the EPOLLET event is triggered by a change and that change happens in your step 4.
The manual page even says that the way to solve problems using EPOLLET is
with nonblocking file descriptors; and
by waiting for an event only after read(2) or write(2) return EAGAIN.
Which is what you already do (even though you use the equivalent EWOULDBLOCK instead of EAGAIN).
In short: When you iterate back to step 1 then epoll_wait should return immediately.
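For reference, a minimal sketch of that loop in C, assuming a nonblocking socket registered with EPOLLIN | EPOLLET and its fd stored in the event's data.fd (error handling trimmed):

```c
#include <errno.h>
#include <sys/epoll.h>
#include <unistd.h>

/* Assumes fd was made nonblocking and registered with EPOLLIN | EPOLLET. */
void event_loop(int epfd)
{
    struct epoll_event ev;
    char buf[4096];

    for (;;) {
        int n = epoll_wait(epfd, &ev, 1, -1);        /* step 1: wait      */
        if (n <= 0)
            continue;                                /* EINTR etc.: retry */
        for (;;) {
            ssize_t r = read(ev.data.fd, buf, sizeof buf);  /* step 2: drain */
            if (r > 0)
                continue;
            if (r < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
                break;       /* step 3: fully drained, safe to wait again */
            return;          /* 0 = peer closed, otherwise a real error   */
        }
    }
}
```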
It will return immediately.
Unfortunately, edge-triggered mode is not very well described (not in any way that a normal person could understand without making wrong assumptions, anyway, in my opinion).
What it does is, it generates exactly one event when the state of the descriptor changes to "ready" (or, whatever you wait on), which means that epoll_wait() will return exactly once. However, that's not the end of the story.
It then generates the next event once the status has flipped to "not ready" and back. This is very important to note. Data coming in alone is generally not enough (although according to the docs it may be, and incoming data may generate more than one event... bleh).
For anything like a socket or pipe or such, the subtlety doesn't matter because you're going to read the data anyway. It's kinda obvious, too, and if you think about it, it makes perfect sense as well.
The emphasis on flipping once to "not ready" and back to "ready" matters, for example, when your descriptor is an eventfd. The tempting assumption is that you can just signal the eventfd whenever you need to wake a waiter (and that it will not wake a waiting thread more than once, which is nice). Surely that's how it has to be, nothing different. And who cares about the value you posted, anyway... the eventfd can store only one value, which gets overwritten, so why bother reading it.
Well, this is not how things work, as I found out the hard way. After waking once, you need to actually read the descriptor to flip the state to "not ready" before waiting again. Otherwise, surprise, the fact that another event came in since you last woke is just irrelevant.
Again, if you think about it, this even makes sense. It only may not be what you expect.
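To make the eventfd case concrete, a sketch of what I mean (assumes Linux, with the eventfd created via eventfd(0, EFD_NONBLOCK) and registered with EPOLLIN | EPOLLET; the point is the read() that resets the state before waiting again):

```c
#include <stdint.h>
#include <sys/epoll.h>
#include <sys/eventfd.h>
#include <unistd.h>

void wait_loop(int epfd, int efd)
{
    struct epoll_event ev;
    uint64_t counter;

    for (;;) {
        if (epoll_wait(epfd, &ev, 1, -1) <= 0)
            continue;
        /* The crucial part: read the eventfd to flip it back to "not
           ready". Skip this, and a later signal may never wake you. */
        read(efd, &counter, sizeof counter);   /* eventfd reads are 8 bytes */
        /* ... handle the wakeup; counter holds the summed signal count ... */
    }
}
```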
The way your program is written, it will work as expected. Do note, however, that given one descriptor only, it's more efficient to just block.
I've seen Objective-C code using a collection of practices, ranging from passing a pointer to an NSError for completion status, to using NSAssert, to @throw-ing exceptions, to relying on a delegate for a status code returned in a callback, to the old C method of returning a boolean/int with 1 indicating success, and so on.
I can't identify a consistent pattern for how I should be handling errors that happen in my app running on client devices. For example, what would you recommend for the following cases?
A client attempts to access a network resource, and it times out / returns 500?
An unexpected state, one that should never even have been reachable, is reached in a logical code section?
An attempt to write to disk fails (out of disk space, no permission, and so on)?
Coming from Java, where server-side practice makes exceptions the weapon of choice, it seems that in Objective-C and C exceptions exist but are not encouraged. NSAssert seems harsh, as it will crash the application, which in most cases is not the optimal solution. So, I'd appreciate best-practice advice.
Exceptions are used to indicate programmer error and/or non-recoverable errors only. Exceptions should not be used for flow control. NSAssert is more of a development tool. Use NSError for recoverable, user addressable (or caused) errors.
https://developer.apple.com/library/ios/documentation/Cocoa/Conceptual/ErrorHandlingCocoa/ErrorHandling/ErrorHandling.html
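Since the question also mentions the old C convention, here is a minimal plain-C sketch of the recoverable-error pattern, mirroring the NSError** idiom (boolean return, error details in an out-parameter; all names here are hypothetical):

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical error record standing in for NSError. */
typedef struct {
    int code;
    const char *message;
} app_error;

/* Returns true on success; on failure fills *err if err is non-NULL.
   The caller checks the return value first, then inspects the error,
   exactly like the NSError** convention. */
bool fetch_resource(const char *url, app_error *err)
{
    (void)url;
    bool timed_out = true;   /* pretend the request timed out */
    if (timed_out) {
        if (err) {
            err->code = 408;
            err->message = "network resource timed out";
        }
        return false;        /* recoverable: the caller decides what to do */
    }
    return true;
}
```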
First we have to agree upon the terminology. I think there are two types of bad situations in a program: Errors and Exceptions.
Errors An unwanted situation that can happen, but from which you can recover, with the program memory in a consistent state so that further execution can continue.
eg1. A client attempts to access a network resource, and it times out / returns 500.
eg2. An attempt to write to disk fails (out of disk space, no permission, and so on).
Approach Handle the error the way you want it handled. Show user message, log it to file/console, report it to your back-end if you think it needs to be reported.
Exceptions A state that should never have been reached and may corrupt the memory state of the app in a way that prevents further execution.
This could happen at two levels.
Level 1 A logic error that you have overlooked and is now causing the exception.
eg1. accessing invalid array elements, i.e. an index-out-of-bounds exception.
In the above example, you unconsciously made a programming mistake. You couldn't have handled it because you overlooked it. This will crash the application.
Approach Your approach here should be to identify the bug and fix it. In the process some end users will be affected, which is the case even with the most meticulous software.
Level 2
As you are programming, you come across a situation that your business logic doesn’t permit.
eg1. An unexpected state, one that should never even have been reachable, is reached in a logical code section.
eg2. A database entry that you always expect to be present as per your business logic. For example, let's say you are making an app that requires a currency table with currency names, Unicode symbols for the currencies, and country names. The values from this table are to be shown in a drop-down in your UI. You may create the table upon first launch of the app and insert the values, or maybe you ship the table in the bundle, but you always expect it to contain the values you inserted. You make an SQLite SELECT query, the SQLite execution succeeds, but no values are returned. Business logic demands that values should be present, but for some mysterious reason they are absent.
Here is where most confusion stems from because sometimes you can recover from some of these situations by treating them as errors and displaying a message to the end user. This approach is wrong. This is an exception at business logic level. One that you should prevent from happening.
Approach You should force-crash the app here as well, using NSAssert, in both development and production builds. This is an aggressive approach and some end users will be affected, but I have read experts advising an aggressive approach to finding bugs and remedying them, rather than pretending they don't exist, or thinking you have smartly covered them all when all you have done is mask exceptions as errors through your "handling tactics". I believe the experts are right. Coming from Java, you may find it a very objCeee.. way of doing things.
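As a concrete illustration of that aggressive approach, a plain-C sketch using the SQLite C API and assert() (in Objective-C you would use NSAssert instead; the table and query are hypothetical):

```c
#include <assert.h>
#include <sqlite3.h>

/* Business logic demands that the (hypothetical) currency table is
   never empty; an empty result is an exception, not an error. */
void load_currencies(sqlite3 *db)
{
    sqlite3_stmt *stmt = NULL;
    int rc = sqlite3_prepare_v2(db,
        "SELECT name, symbol, country FROM currency", -1, &stmt, NULL);
    assert(rc == SQLITE_OK);   /* a bad query is a programmer error */

    int rows = 0;
    while (sqlite3_step(stmt) == SQLITE_ROW) {
        rows++;
        /* ... read columns, populate the drop-down ... */
    }
    sqlite3_finalize(stmt);

    /* Note: assert() compiles out under NDEBUG; to crash in release
       builds as well, call abort() explicitly on failure. */
    assert(rows > 0 && "currency table is unexpectedly empty");
}
```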
Nils and Bools You should understand that these are just facilitators for reaching the conclusion of whether to treat something as an error or an exception; they are not ends in themselves. When writing a method with a return value, you should define what a nil or a NO boolean signifies and document your intent above the method. Sometimes it will be an error, sometimes an exception, sometimes a normal expected result. When dealing with Apple frameworks and third-party libraries, you should understand what their intent is when they return such values, and decide how you want to treat them in your code.
Some other tips
Don't use @try/@catch, because experts say so. If you wish to challenge them, on the assumption that Apple put it there for a reason, that Apple is always right, and that the experts (including Apple employees themselves) have got it wrong all these years, you can go ahead and do it.
Formalize your approach to this, stick to it, and document your intent. Don't think it over and over. At some stage errors/exceptions/nils/bools cross the limits of propriety and reach the precincts of taste; you will be better off settling this once and for all and freeing up your time.
I've got a legacy system that processes extremely complex data that's changing every second. The modularity of the system is quite poor so I can't split the business logic into smaller modules to ease functional testing.
The actual test process is "close your eyes, click, and pray", which is not acceptable at all. I want to be confident in the changes we commit to the code.
What are the good testing practices, the bibles to read, the changes to make, to increase confidence in such a system?
The question is not about unit testing: the system wasn't designed for that, and it would take too much time to decouple, mock, and stub all the dependencies; most of all, we sadly don't have the time and budget for that. I don't want a philosophical debate about functional testing: I want facts that work in real life.
It sounds like you have yourself a black box as regards testing.
http://en.wikipedia.org/wiki/Black-box_testing
To put it simply, it's horrible, but may be all you can do if you can't isolate the system in any way.
You need to insert known data into your system and compare the result with the known output.
You really need known data & output for
normal values - normal data - you'll find out that it can at least seem to do the right thing
erroneous values - spelling errors, invalid values - so you know that it will tell you if the input is rubbish
out of range - -1 on signed integers meant to be non-negative, values greater than about 2.1 billion (the 32-bit signed maximum, 2,147,483,647), and so on - so you know it won't crash out on seriously mis-entered or corrupted data
dangerous - input that would break the SQL, simulating SQL injection
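One way to mechanize the categories above is a table-driven harness: push each known input through the black box and compare against the known output. A minimal C sketch, where process_input() is a hypothetical stand-in for whatever entry point your system exposes:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical stand-in for the black box's entry point;
   replace with a real call into your system. */
static const char *process_input(const char *input)
{
    (void)input;
    return "ok:42";
}

struct test_case {
    const char *input;
    const char *expected;
};

static const struct test_case cases[] = {
    { "42",                      "ok:42"        },  /* normal value   */
    { "fourty-two",              "error:parse"  },  /* erroneous      */
    { "-1",                      "error:range"  },  /* out of range   */
    { "2147483648",              "error:range"  },  /* > 32-bit max   */
    { "'; DROP TABLE events;--", "error:reject" },  /* dangerous      */
};

int main(void)
{
    int failed = 0;
    for (size_t i = 0; i < sizeof cases / sizeof cases[0]; i++) {
        const char *got = process_input(cases[i].input);
        if (strcmp(got, cases[i].expected) != 0) {
            printf("FAIL: %s -> %s (expected %s)\n",
                   cases[i].input, got, cases[i].expected);
            failed++;
        }
    }
    return failed;
}
```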
Lastly, make sure that all errors are carefully handled rather than merely logged while the bad/corrupt/null value gets passed on through the system.
Any processes you can isolate and test that way will make debugging easier, as black box testing can't tell you where the error occurred. This means you then need to diagnose errors based on what happened, more in the style of House MD than of a normal debugging session.
Once you have the different data types listed above, you can test all changes in isolation with them, and then in the system as a whole. Over time, as you eventually touch most aspects of the system, you'll have test cases for all areas, and be able to say more easily where a failure was most likely to have occurred.
Also: make sure you put tracers in your known data, so that when you're testing the range limits on a module you don't accidentally indicate a stock market crash, and so you can take the test data out of the result flow before it ends up on a CEO's desk.
I hope that's some help
Working Effectively with Legacy Code by Michael Feathers (http://www.amazon.com/Working-Effectively-Legacy-Michael-Feathers/dp/0131177052) seems to be the book for these situations.
I'm working with a legacy database which, due to poor management and design, has had a wild growth of columns which either never have been used or are no longer being used.
Is it possible to somehow query for column usage? As in, how often a column is being selected (either specifically, or with *, or joined on)?
It seems to me like this is something we should be able to retrieve somehow, but I have been unable to find anything like it.
Greetings,
F.B. ten Kate
Unfortunately, this analysis on the DB side isn't really going to be a full answer. I've seen a LOT of instances where application code only needed 3 columns of a 10+ column table, but selected them all anyway.
Your column would still show up on a usage report in any sort of trace or profiling you did, but it still may not ACTUALLY be in use.
You might have to either a) analyze the entire collection of apps that use this database, or b) start drafting a return-on-investment-style doc on whether it's worth rebuilding.
This article will give you a good idea of how to search all fixed code (procedures, views, functions, and triggers) for the columns that are used. The code in the article searches for a specific table/column combination, but you could easily adapt it to run for all columns. For anything dynamically executed, you'd probably have to set up a profiler trace.
Even if you could determine whether a column had been used in the past X period of time, would that be good enough? There may be some obscure program out there that populates a column once a week, a month, a year; or once every time they click the mystery button that no one ever clicks, or to log the report that only Fred in accounting ever runs (he quit two years ago), or that gets logged to if that one rare bug happens (during daylight savings time, perhaps?)
My point is, the only way you can truly be certain that a column is absolutely not used by anything is to review everything -- every call, every line of code, every ad hoc Excel data dump, every possible contingency -- everything that references the database. As this may be all but unachievable, try to get a formally defined group of programs and procedures that must be supported, bend over backwards to make sure they are supported, and be prepared to fix things when some overlooked or forgotten piece of functionality turns up.
Let's say I am making a Use Case about filling out a quiz. You have only 5 minutes to fill out that quiz. When writing the Use Case for "Filling Out the Quiz", how should I signal that there is a time limit, and that after it the Use Case is finished? Do I simply write it as text, or is there something more formal to use?
Sketch of what my Use Case can be:
1. The Actor tells the System he's ready to start the quiz.
2. The System presents the Actor with the first question of the Quiz and its 4 possible answers and tells him how much time he has left.
3. The Actor tells the System what is his chosen answer (a number between 1 and 4).
Repeat steps 2-3 until there are no questions left.
4. The System registers the results of the quiz.
I could just put operations between the steps shown above to check whether the time is up, but there is probably a better way to show this.
Thanks
You can use an alternative flow for the timeout case, like:
Alternative Flow 1: Timeout
2. The System detects that ...
To be a bit more precise, while agreeing with the other answer in general: for these situations Alistair Cockburn (Writing Effective Use Cases) recommends using extending use cases (alternate flows), which name the time limit in their extension points. In textual form, you can simply reference the range of scenario step numbers during which the timeout can happen, as in the sketch below.
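For illustration, a sketch of how the timeout extension might read in that textual style, using the step numbers from the scenario in the question (the wording is mine, not official notation beyond Cockburn's step-range numbering convention):
Extensions:
2-3a. The time limit expires while steps 2-3 are repeating:
2-3a1. The System stops accepting answers and informs the Actor that time is up.
2-3a2. The Use Case continues at step 4 (the System registers the results gathered so far).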