Year 2038 bug, can I deviate from the OpenPGP RFC?

RFC 4880, the document that describes the OpenPGP cryptography standard, finds its roots in RFC 2440, published in 1998 (that's sixteen years ago, arguably before 64-bit systems became widespread). Both specifications say the same thing about how timestamps are handled:
3.5. Time Fields
A time field is an unsigned four-octet number containing the number
of seconds elapsed since midnight, 1 January 1970 UTC.
Should one try to follow this RFC as closely as possible (and, here, face a sweet year 2038 bug one day)? Is it "risky" for a developer not to follow parts of standards/specifications/RFCs (especially when it comes to cryptography) when those parts already look potentially obsolete?
I am a bit afraid of asking because the question sounds silly, but if I "implement RFC 4880" in my own way, it is not the official thing any more. So, what is the best thing a developer can do about parts of a specification she sees as "obsolete"? Nothing?

First: I think the example in your question is wrong. RFC 4880 uses an unsigned 32-bit integer. The Y2038 problem affects signed 32-bit integers. According to Wikipedia, unsigned 32-bit timestamps work until the year 2106. A little more time.
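That arithmetic is easy to check (a minimal Python sketch; nothing here comes from the RFC beyond the four-octet width):

```python
from datetime import datetime, timezone

# Largest value an unsigned four-octet (32-bit) time field can hold.
max_u32 = 2**32 - 1

# Interpreted as seconds since the Unix epoch, it runs out in 2106.
print(datetime.fromtimestamp(max_u32, tz=timezone.utc))    # 2106-02-07 06:28:15+00:00

# The signed 32-bit limit, by contrast, gives the classic 2038 cutoff.
print(datetime.fromtimestamp(2**31 - 1, tz=timezone.utc))  # 2038-01-19 03:14:07+00:00
```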
To answer your question:
I think the best way is to get in contact with the RFC working group / the authors of the RFC and tell them about the obsolescence. Maybe a follow-up RFC will fix the issue.
For your example, I think you can refrain from contacting the OpenPGP WG. There will be plenty of updates before 2106, and I suspect V5 keys will have 8-octet time fields.


Provably correct marshalling?

Do there exist serialization schemes (marshalling) for data structures that can be formally proven to be correct?
I am agnostic to the particular programming language; it could be OCaml, Haskell, C++, Java, or something else, as long as the data to be serialized can be assumed to be properly typed.
To reformulate/clarify my question: what I am interested in is whether there exist known standard encoding schemes for writing data structures to disk that can be proved to have 100% fidelity, in the sense that the deserialized data is exactly the same as the original.
As a simplifying assumption, there is no complication of pointers/references. The input is “pure data”, for lack of a better way to say it.
It's a slightly vague question, but I'll have a go.
Heterogeneous Environments
Serialisation's job is to take data in the memory of one computer program, convert it to some sort of standardised representation, and convert that back into data in the memory of, quite possibly, another computer program on a completely different sort of computer. That opens up some interesting possibilities.
For example, the representation of a floating point value on many computers is IEEE 754. But it's not wholly universal; historically, companies like Cray and IBM used alternative formats, so there exists the possibility that a value, when deserialised on those machines, might not be exactly the same value as was serialised in the first place. Generally no one cares, because the differences are numerically very small.
This shows up in some serialisation technologies; ASN.1's own wire formats for floats are either a text representation or its own binary format, which is not IEEE 754. The idea behind the text representation is that it can convey any floating point value, with no constraints. In contrast, a binary format often has limits on precision, maximum value, etc.
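The difference is easy to demonstrate (a minimal Python sketch; any fixed-precision text encoding shows the same effect):

```python
import struct

x = 0.1 + 0.2  # 0.30000000000000004; binary floats rarely have short decimal forms

# A fixed-precision text representation silently loses information...
assert float(f"{x:.6f}") != x

# ...while a text form with enough digits round-trips exactly.
assert float(repr(x)) == x

# Narrowing a double to an IEEE 754 single-precision float is also lossy.
narrowed, = struct.unpack("<f", struct.pack("<f", x))
assert narrowed != x
```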
Text is another potential problem area; serialised Unicode strings sent to a computer that doesn't support Unicode will likely result in the deserialised string being different from the original.
Similarly with platforms that don't support 64-bit integers, etc. Java is very annoying - historically it had no unsigned integers, so handling 64-bit unsigned values received from, say, a C++ program is a nuisance.
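Both failure modes are easy to reproduce (a Python sketch; ASCII stands in for any target that can't represent the source text):

```python
import struct

# A string with characters the target encoding cannot represent.
s = "naïve café Zürich"
lossy = s.encode("ascii", errors="replace").decode("ascii")
assert lossy != s  # 'na?ve caf? Z?rich'

# A large unsigned 64-bit value read back as a signed 64-bit integer.
u64 = 2**64 - 1
signed, = struct.unpack("<q", struct.pack("<Q", u64))
assert signed == -1  # same bits, completely different value
```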
Conclusion - It's a Logical Impossibility
So in some senses, for heterogeneous environments, there can be no serialisation technology formally proven to reproduce identical values, because the destination machine is of a different architecture, and its representation may well be different, or limited in some way.
Homogeneous Environments
Serialisation used to convey data from a computer program on one computer to exactly the same program on an identical computer (i.e. a homogeneous environment) ought to produce exactly the same values on deserialisation. AFAIK there are no formally proven serialisation technologies. If there's serialisation built into the Ada language (I don't know), the Green Hills Ada compiler is formally proven. Boost for C++ has a serialisation library and is heavily peer reviewed, so that comes close, especially if used on top of Green Hills' formally proven C++ compiler. Some of the commercial ASN.1 tools / libraries are very mature and highly trusted.
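Even without a formal proof, the property in question is easy to state as a round-trip test (a minimal Python sketch, with pickle standing in for any serialisation scheme):

```python
import pickle

def round_trips(value) -> bool:
    """Fidelity property: deserialise(serialise(v)) == v."""
    return pickle.loads(pickle.dumps(value)) == value

# Holds for plain, properly typed data on a single platform...
assert round_trips({"id": 42, "tags": ["a", "b"], "ratio": 0.5})

# ...but equality itself is subtle: NaN != NaN, so even a bit-identical
# round-trip fails a naive check. A proof must pin down "the same" first.
assert not round_trips(float("nan"))
```

A formal proof would have to establish that property for every reachable value of the type, not just for sampled inputs.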
What is it that is Formally Proven?
In that last para I have touched on a difficulty with your question: formal proof is perhaps only of value if the entire software development stack (libraries, compiler, CPU) and your application source code are themselves formally proven. Otherwise you could have perfect source code for a serialisation library being compiled by a junk compiler, linked against rubbish libraries, running on a shonky CPU; it's not going to work.
So, when one is talking about "formally proven" one is generally talking about the whole system, not just an individual component. A component part that is, by itself, formally proven to meet its specification is a good aid to achieving a proven system, but it does not magically confer "correctness" on the whole system all on its own. Every other component needs to meet its specification too.
And what we've seen historically is that, quite often, CPUs don't really do what their data sheet says they do. Some will take shortcuts in floating point arithmetic in the interests of completing instructions in a single cycle in preference to achieving a numerically perfect result.
Sorry for the rambling answer, but I hope that's of interest and help.

Authoritative SQL standard documentation

I'm curious to know some more details about the various SQL standards, i.e. SQL-92, SQL:1999, SQL:2003, SQL:2008, etc. There is a short and useful overview on Wikipedia, with links to very expensive documents. Why are those documents not open to the public? Can I find some open and free information?
Please don't post links you found on Google. I'm interested in somewhat authoritative documentation only.
Quoting from one of my web sites:
We all love open source software. Wouldn't it be great if international standard documents such as the SQL standard were open too?
As a matter of fact: they are!
However, they are not free—just public. Very much like open source software is not necessarily free. Too often, we neglect these differences. Just because we have to pay for the standard doesn't mean it is secret.
A download of the most relevant part of the SQL standard—part 2—is available for USD 60 at ANSI. A CD with all parts on it can be bought from ISO for CHF 352. Not free, but affordable.
You mentioned in some comments that you are mostly interested in part 2, so spending USD 60 might be your best option.
If you just need to know about the syntax up to 2003, there are two great free resources:
BNF grammar of SQL-92, SQL:1999 and SQL:2003: http://www.savage.net.au/SQL/
Online validator for SQL:1999: https://developer.mimer.com/services/sql-validator-99/
Finally, the complete text of “SQL-99 Complete, Really” is available at the MariaDB knowledge base. However, this book was written in 1999 when no database actually supported the described features. Keep that in mind when using this resource.
Other answers also mentioned "free" copies of the standards available on the web. Yes there are—those are mostly draft versions. I can't tell which of them are legal, so I'd rather not link them.
Finally, a little self-ad: I've just launched http://modern-sql.com/ to explain the standard in an easily accessible way for developers. Note that the actual standards text is written like laws are written :) Depending on your background, that might not be what you want anyway.
The Postgresql Developer FAQ maintains links to each of them:
http://wiki.postgresql.org/wiki/Developer_FAQ#Where_can_I_get_a_copy_of_the_SQL_standards.3F
There are some hyperlinked versions of 92, 99 and 2003 here.
However, I've never been able to use them effectively (read: I gave up).
This 92 text is useful (and has been quoted here on SO several times).
ISO/IEC 9075-1:2011 -- google that.
Actually, digging around I found
http://www.incits.org/standards-information/
which has a freely available section that leads to something that redirects here:
http://standards.iso.org/ittf/PubliclyAvailableStandards/index.html
And finally, the standards. You have to accept a license agreement to download a PDF.
However, from what I have read in my pursuit, the RDBMS (well, the 'RD' part) is going the way of the dinosaur. If you are building something new (and therefore want the new standards), you may want to reconsider all options.
You don't have to pay for all of the standards. SQL-92 is freely available, for instance.

What were the (then) unpublished optimizations that Steve Yegge referred to in "Dynamic Languages Strike Back"?

I was reading the transcript of Steve Yegge's Dynamic Languages Strike Back presentation when I noticed this comment where he begins to discuss trace trees:
I'll be honest with you, I actually have two optimizations that couldn't go into this talk that are even cooler than this because they haven't published yet. And I didn't want to let the cat out of the bag before they published. So this is actually just the tip of the iceberg.
What are the optimizations he was referring to?
Update
Several days ago, I asked this question in a comment on the article. However, comment moderation is turned on (for good reasons), so it hasn't appeared yet.
Update
It has been a couple weeks since I first tried to reach the author. Does anyone else know another way to contact him?
Take a look at this: https://blog.stackoverflow.com/2009/04/podcast-50/
EDIT: It is difficult to find specific (confirmed) references; however, this paper perhaps gives some information regarding this: http://people.mozilla.org/~dmandelin/tracemonkey-pldi-09.pdf
and this blog post, which appears related: http://andreasgal.wordpress.com/2008/08/22/tracing-the-web/
It might not be related, as it is a Microsoft research paper from March 2010: http://research.microsoft.com/pubs/121449/techreport2.pdf
This is purely speculative on my part, but it appears (at least to me) that there are two major forms of performance: that at the developer level (IDE) and that at the compiler level, which this subject of trace trees addresses - hence the "continuous optimization" during execution to get the trace inline for the hot spots. This then leads me quickly to areas of optimization related to multi-cores and how to utilize the trace tree somehow in that regard (multi-core environments). Interesting stuff, considering the currently theoretical speed speculation around non-static typing as compared to the speed winners of static typing utilized in current C, and the performance potential to be gained. I recall a discussion I had with a hardware engineer years ago (1979) where we speculated that if we could just capture the 'hot' execution paths, we could get a huge gain in performance by keeping them "ready to run" in situ somehow - this was way prior to the work at HP in this regard (1999?), and unfortunately we did not get further than the discussion stage due to other commitments. (I am rambling here, I think... :)
Or was this just related to the Go language? Hard to tell in some respects.
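Purely to illustrate the hot-path idea from the speculation above (a toy Python sketch; `HOT_THRESHOLD`, `run_loop_body`, and the caching scheme are invented for illustration and are not the unpublished optimizations):

```python
# Toy hot-path detection: count executions per loop site and, once a site
# is "hot", dispatch through a cached fast path on subsequent calls.
HOT_THRESHOLD = 1000
counters = {}
fast_paths = {}

def run_loop_body(site_id, body, *args):
    if site_id in fast_paths:
        return fast_paths[site_id](*args)   # stand-in for a compiled trace
    counters[site_id] = counters.get(site_id, 0) + 1
    if counters[site_id] >= HOT_THRESHOLD:
        fast_paths[site_id] = body          # a real tracing JIT would compile here
    return body(*args)
```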
You can watch the video on YouTube, on the StanfordUniversity channel: http://www.youtube.com/watch?v=tz-Bb-D6teE
You can add comments there, too. Maybe someone will come to your rescue.

Adding a trial with time limitation in VB.NET

How can I add a trial with random serials (or a single serial) that, once registered, expires after 6-12 months? And also, if the user sets the clock back to an earlier date, it should remain expired.
Read the articles posted in the comment by @JoelCoehoorn first.
If you really want to pursue this after reading the articles, I believe that .NET Licensing might hold an answer for you.
http://windowsclient.net/articles/Licensing.aspx
You would essentially set up your main form as a licensed class.
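For the clock-rollback requirement specifically, the usual trick is to persist the last timestamp the application saw and treat any backwards jump as tampering. A language-agnostic sketch of that logic, here in Python (the file name and trial length are placeholders; a real implementation would protect this state, e.g. via the licensing framework above):

```python
import json, time
from pathlib import Path

STATE_FILE = Path("trial_state.json")  # placeholder; hide/obfuscate in practice
TRIAL_SECONDS = 365 * 24 * 3600        # example: a 12-month trial

def trial_is_valid() -> bool:
    now = time.time()
    if STATE_FILE.exists():
        state = json.loads(STATE_FILE.read_text())
    else:
        state = {"installed": now, "last_seen": now, "tampered": False}
    # Clock set back? Record tampering and keep the trial expired for good.
    if now < state["last_seen"]:
        state["tampered"] = True
    state["last_seen"] = max(now, state["last_seen"])
    STATE_FILE.write_text(json.dumps(state))
    return not state["tampered"] and now - state["installed"] < TRIAL_SECONDS
```

Note that anything stored client-side can be deleted or edited, so this only deters casual tampering.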

What's the technical reason behind the 2010 bank card problems in Germany?

It's been in the news (1) (2), but there's been no technical explanation, besides that it is a software bug on the chip.
Is there any further information on what kind of bug this is? A one-off bug, some number conversion problem or ...?
EDIT: Apparently the bug can be circumvented by modifying the terminals' software. It'd be nice to know how this is done.
A similar problem happened with SMS messages received by some Windows Mobile phones: they appeared to come from 2016. This probably had to do with the interpretation of BCD numbers as hexadecimal.
That results in reading the BCD encoding of 10 (the byte 0x10) as 16 instead of 10.
Maybe something similar happened here.
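To make the suspected mechanism concrete, here is a minimal Python sketch of the BCD mix-up (the 10/16 values match the symptom described; the actual card firmware is not public):

```python
def bcd_encode(n: int) -> int:
    """Pack a two-digit number as BCD: one decimal digit per nibble."""
    return ((n // 10) << 4) | (n % 10)

year_byte = bcd_encode(10)  # the year '10 stored as BCD -> 0x10

# Correct decoding treats each nibble as a decimal digit.
print((year_byte >> 4) * 10 + (year_byte & 0x0F))  # 10 -> year 2010

# Buggy decoding treats the byte as a plain binary number.
print(year_byte)  # 16 -> year 2016
```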
My guess is that we're simply seeing the results of management cutting costs on development and testing. There is probably just a simple little bug at the bottom of everything, and it escaped QA.