C# DateTime Serialization with Microsoft Bond

I am replacing the internal serialization in my application, moving from Jil to Bond.
I've been converting simple classes with Microsoft Bond attributes, and everything worked fine until I hit one with a DateTime.
I then got a KeyNotFoundException from a Dictionary during serialization.
I suspect Bond does not support DateTime; is that so?
And if it is, why is it not implemented? I know DateTime is not a basic type, but adding a custom converter is not worth it: the speed gain over protobuf-net is minimal, and I don't need generics, just a simple, fast serializer/deserializer.
I hope I'm missing something. I really want to use Bond, but I also need an easy tool; I cannot risk breaking the application because something as basic as a date or a GUID is not supported by default.
I'm writing here after hours of research, and the Young Person's Guide to C# Bond does not clearly state what is and what is not supported.

No, there is no built-in timestamp type in Bond. The built-in types in Bond are documented in the manual for the gbc compiler.
For GUIDs, there's Bond.GUID, which has implicit conversions to/from System.Guid. Note that Bond.GUID lives in bond.bond, so if you want to refer to it from a .bond file, you'll need to use Bond's import functionality and import "bond/core/bond.bond".
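To make that concrete, here is a minimal sketch of the C# side (assuming the Bond C# assemblies are referenced; the schema side would declare the field as bond.GUID after the import mentioned above):

    // Round-trip a System.Guid through Bond.GUID using the implicit conversions.
    using System;

    class GuidRoundTrip
    {
        static void Main()
        {
            Guid original = Guid.NewGuid();

            Bond.GUID bondGuid = original;   // implicit System.Guid -> Bond.GUID
            Guid roundTripped = bondGuid;    // implicit Bond.GUID -> System.Guid

            Console.WriteLine(original == roundTripped); // True
        }
    }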
There's an example showing how to use DateTime with a custom type alias.
The reason there is no built-in timestamp type in Bond is that there are so many different ways (and standards) for representing timestamps. There's a similar C++ example that shows representing time with boost::posix_time::ptime, again highlighting the many ways that time can be represented.
Our experience has been that projects usually already have a representation for timestamps that they want to use, so, we recommend using a converter so that you can use the representation that's appropriate for your circumstances.
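As a rough sketch of the converter approach, modeled on the DateTime example in the Bond repository (treat the class name, method signatures, and the schema-side wiring in the .bond file and gbc command line as an approximation of that example, not a spec), a DateTime can be stored as int64 ticks like this:

    // Sketch: store a DateTime as Int64 ticks via a Bond type alias converter.
    // Bond resolves the alias through public static Convert methods like these;
    // see the date_time example in the Bond repo for the schema-side wiring.
    using System;

    public static class BondTypeAliasConverter
    {
        public static long Convert(DateTime value, long unused)
        {
            // Normalize to UTC so round-trips are unambiguous.
            return value.ToUniversalTime().Ticks;
        }

        public static DateTime Convert(long value, DateTime unused)
        {
            return new DateTime(value, DateTimeKind.Utc);
        }
    }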
As a side note, my experience has been that DateTimeOffset is a more generally useful type, compared to DateTime.

Related

Creating domains for every "logical" domain

I have a databases class in which the professor wants us to create domains for every type, even when these just end up being aliases for other types. For example, instead of using the default DATE type, we would create our own type depending on what kind of date it is (e.g., OrderDate).
I'm wondering if this is common or a best practice.
I can think of some pros and cons to this approach. A pro is that it makes clear exactly what the domain is intended for, and typically we'd only compare fields that share the same domain; any other comparison is something to watch for, since it could be comparing apples to oranges. As a con, it makes the types more confusing to work with, because we'd have to refer to the domain declaration to figure out what kind of type a column really is (not that we need to do this very often).
This is not a particularly common practice. I have worked on many databases over the years and have never used such substitutions for base types.
In your example, an order date may well be an order date. But I might want to know how long ago that was -- which requires "mixing" types, because the current date (sysdate? now()? getdate()? CURRENT_TIMESTAMP?) is not an OrderDate. Or I might want to know how long after the order the first complaint or first return was made. Even if the conversion is invisible and automatic, why introduce incompatible types?
Another issue is that different databases differ in their support for user-defined data types. So, code using user defined types would likely make code more difficult to port to a different database. Why limit future options?
User-defined types do have a place for particular new types that might be needed -- complex numbers and points, perhaps. There may even be situations in some databases where a user-defined type over a base type is useful -- for instance, to represent a telephone number consistently. But using them liberally as substitutes for built-in types seems like overkill: it complicates the code, hampers some important queries, and limits future portability options.

Julia: how stable are serialize() / deserialize()

I am considering the use of serialize() and deserialize() for all of my data i/o due to their convenience. I do not, however, want to be stuck with unreadable files on a Julia update.
How stable are serialize() and deserialize()? Should they work between updates of 0.3? Can I expect safe behavior if I stick to basic types like arrays of Float64?
Thank you.
If you want to store data that you may need to read in the future, you should not use a format whose maintainers will introduce breaking changes whenever they find it useful. As far as I understand, the default serialization format is meant for network communication, so it is designed for maximum performance.
There is also the HDF5.jl package that uses a documented format and a common library that has wrappers for different languages.
I believe the official answer here is, "people will try not to break the serialization format, but you shouldn't depend on it."

vb.net bigger than decimal data type

I am using VB.NET and I want to do some cryptographic computations with keys of length 1024 bits (128 bytes). I do not want to use a known algorithm, so I cannot use the Security library.
The biggest numeric data type in VB.NET is Decimal (16 bytes).
How can I do these computations? Is there a different data type that I am not aware of?
You may like to check System.Numerics.BigInteger, introduced in .NET 4.0.
You may also find it interesting to look at Large Number Calculations in VB.NET.
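To make the BigInteger suggestion concrete, here is a small sketch (shown in C#; the same System.Numerics types are used identically from VB.NET once you reference System.Numerics). The exponent and modulus below are placeholders, not a real key setup:

    // Sketch: 1024-bit arithmetic with System.Numerics.BigInteger (.NET 4.0+).
    using System;
    using System.Numerics;
    using System.Security.Cryptography;

    class BigNumberSketch
    {
        static void Main()
        {
            // 128 random bytes ~ a 1024-bit number; the extra trailing zero byte
            // keeps the little-endian BigInteger non-negative.
            var rng = RandomNumberGenerator.Create();
            byte[] bytes = new byte[129];
            rng.GetBytes(bytes);
            bytes[128] = 0;

            BigInteger value = new BigInteger(bytes);
            BigInteger exponent = 65537;                      // placeholder exponent
            BigInteger modulus = BigInteger.Pow(2, 1024) + 1; // placeholder modulus

            // Modular exponentiation, the core operation behind many cryptosystems.
            BigInteger result = BigInteger.ModPow(value, exponent, modulus);

            // And back to a byte[] for keys, blocks, etc.
            byte[] resultBytes = result.ToByteArray();
            Console.WriteLine(resultBytes.Length);
        }
    }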
It would be instructive, even if you're not going to use them, to inspect the existing classes in the System.Security.Cryptography namespace.
You'll note that most of the methods that deal with keys, blocks, etc., are specified in terms of byte[]. A byte[] can be as big as you want or need it to be.
(Insert usual warnings about rolling your own crypto code)

Naming of types when using an ORM

I am working on a project where a number of types have the suffix "Instance".
For example, we have the concept of tabs in the application, so we have a TabInstance type.
To me this seems redundant and even confusing / wrong, as there is already the concept of an instance in OO terminology.
The system uses NHibernate as an ORM. I wonder if such a naming scheme is typical in systems using ORMs, or whether it is used for some other reason?
For example, we have the concept of tabs in the application, so we have a TabInstance type.
It seems as if you should have the type Tab instead of TabInstance.
Yes, get rid of them.

Should primitive datatypes be capitalized?

If you were to invent a new language, do you think primitive datatypes should be capitalized, like Int, Float, Double, String to be consistent with standard class naming conventions? Why or why not?
By "primitive" I don't mean that they can't be (or behave like) objects. I guess I should have said "basic" datatypes.
If I were to invent a new language, it wouldn't have primitive data types, just wrapper objects. I've done enough wrapper-to-primitive-to-wrapper conversions in Java to last me the rest of my life.
As for capitalization? I'd go with case-sensitive first letter capitalized, partly because it's a convention that's ingrained in my brain, and partly to convey the fact that hey, these are objects too.
Case insensitivity leads to some crazy internationalization stuff; think umlauts, tildes, etc. It makes the compiler harder to write and gives the programmer freedoms that don't result in better code. Seriously, you think there are enough arguments over where to put braces in C... just watch.
As far as primitives looking like classes... only if you can subclass primitives. Don't assume everyone capitalizes class names; the C++ standard libraries do not.
Personally, I'd like a language that has, for example, two integer types:
int: Whatever integer type is fastest on the platform, and
int(bits): An integer with the given number of bits.
You can typedef whatever you need from that. Then maybe I could get a fixed(w,f) type (number of bits to the left and right of the decimal point, respectively) and a float(m,e) (mantissa and exponent bits). And uint and ufixed for unsigned. (Anyone who wants an unsigned float can beg.) And standardize how bit fields are packed into structures. If the compiler can't handle a particular number of bits, it should say so and abort.
Why, yes, I program embedded systems and got sick of int and long changing size every couple years, how could you tell? ^_-
(Warning: MASSIVE post. If you want my final answer to this question, skip to the bottom section, where I answer it. If you do, and you think I'm spouting a load of bull, please read the rest before trying to argue with my "bull.")
If I were to make a programming language, here are a few caveats:
The type system would be more or less Perl 6 (but I totally came up with the idea first :P) - dynamically and weakly typed, with a stronger (I'm thinking Haskellian) type system that can be imposed on top of it.
There would be a minimal number of language keywords. Everything else would be reassignable first-class objects (types, functions, so on).
It will be a very high level language, like Perl / Python / Ruby / Haskell / Lisp / whatever is fashionable today. It will probably be interpreted, but I won't rule out compilation.
If any of those (rather important) design decisions don't apply to your ideal language (and they may very well not), then my following (apparently controversial) decision won't work for you. If you're not me, it may not work for you either. I think it fits my language, because it's my language. You should think about your language and how you want your language to be so that you, like Dennis Ritchie or Guido van Rossum or Larry Wall, can grow up to make bad design decisions and defend them in retrospect with good arguments.
Now then, I would still maintain that, in my language, identifiers would be case insensitive, and this would include variables, functions (which would be variables), types (which would also be variables, both built-in/primitive (which would be subclass-able) and user-defined), you name it.
To address issues as they come:
Naming consistency is the best argument I've seen, but I disagree. First off, allowing two different types called int and Int is ridiculous. The fact that Java has int and Integer is almost as ridiculous as the fact that neither of them allows arbitrary precision. (Disclaimer: I've become a big fan of the word "ridiculous" lately.)
Normally I would be a fan of allowing people to shoot themselves in the foot with things like two different objects called int and Int if they want to, but here it's an issue of laziness, and of the old multiple-word-variable-name argument.
My personal take on the issue of underscore_case vs. MixedCase vs. camelCase is that they're all ugly and less readable, and if at all possible you should use only a single word. In an ideal world, all code would be stored in source control in an agreed-upon format (the style most of the team uses), and the team's dissenters would have hooks in their VCS to convert all checked-out code from that style to theirs and back again when checking in, but we don't live in that world.
It bothers me for some reason when I have to continually write MixedCaseVariableOrClassNames a lot more than it bothers me to write underscore_separated_variable_or_class_names. Even TimeOfDay and time_of_day might be the same identifier because they're conceptually the same thing, but I'm a bit hesitant to make that leap, if only because it's an unusual rule (internal underscores are removed in variable names). On one hand, it could end the debate between the two styles, but on the other hand it could just annoy people.
So my final decision is based on two parts, which are both highly subjective:
If I make a name others must use that's likely to be exported to another namespace, I'll probably name it as simply and clearly as I can. I usually won't use many words, and I'll use as much lowercase as I can get away with. sizedint doesn't strike me as much better or worse than sized_int or SizedInt (which, as far as examples of camelCase go, looks particularly bad because of the dI IMHO), so I'd go with that. If you like camelCase (and many people do), you can use it. If you like underscores, you're out of luck, but if you really need to you can write sized_int = sizedint and go on with life.
If someone else wrote it, and wanted to use sized_int, I can live with that. If they wrote it and used SizedInt, I don't have to stick with their annoying-to-type camelCase and, in my code, can freely write it as sizedint.
Saying that consistency helps us remember what things mean is silly. Do you speak english or English? Both, because they're the same word, and you recognize them as the same word. I think e.e. cummings was on to something, and we probably shouldn't have different cases at all, but I can't exactly rewrite most human and computer languages out there on a whim. All I can do is say, "Why are you making such a fuss about case when it says the same thing either way?" and implement this attitude in my own language.
Throwaway variables in functions (i.e., Person person = /* something */) are a pretty good argument, but I disagree that people would write Person thePerson (or Person aPerson). I personally tend to just write Person p anyway.
I'm not much fond of capitalizing type names (or much of anything) in the first place, and if it's enough of a throwaway variable to declare it undescriptively as Person person, then you won't lose much information with Person p. And anyone who says "non-descriptive one-letter variable names are bad" shouldn't be using non-descriptive many-letter variable names either, like Person person.
Variables should follow sane scoping rules (like C and Perl, unlike Python - flame war starts here guys!), so conflicts in simple names used locally (like p) should never arise.
As to making the implementation barf if you use two variables with the same names differing only in case, that's a good idea, but no. If someone makes library X that defines the type XMLparser and someone else makes library Y that defines the type XMLParser, and I want to write an abstraction layer that provides the same interface for many XML parsers including the two types, I'm pretty boned. Even with namespaces, this still becomes prohibitively annoying to pull off.
Internationalization issues have been brought up. Distinguishing between capital and lowercase umlauted U's will be no easier in my interpreter/compiler (probably the former) than in my source code.
If a language has a string type (i.e. the language isn't C) and the string type supports Unicode (i.e. the language isn't Ruby - it's only a joke, don't crucify me), then the language already provides a way to convert Unicode strings to and from lowercase, like Perl's lc() function (sometimes) and Python's unicode.lower() method. This function must be built into the language somewhere and can handle Unicode.
Calling this function during an interpreter's compile-time rather than its runtime is simple. For a compiler it's only marginally harder, because you'll still have to implement this kind of functionality anyway, so including it in the compiler is no harder than including it in the runtime library. If you're writing the compiler in the language itself (and you should be), and the functionality is built into the language, you'll have no problems.
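To make the point concrete, here is a hypothetical sketch (in C#, not part of the original discussion) of a case-insensitive identifier table: the case folding is just the Unicode-aware comparer the runtime already ships with, so the compiler or interpreter needs nothing the standard library doesn't already provide.

    // Hypothetical sketch: case-insensitive identifier lookup using the
    // runtime's culture-invariant, Unicode-aware string comparer.
    using System;
    using System.Collections.Generic;

    class SymbolTable
    {
        private readonly Dictionary<string, object> symbols =
            new Dictionary<string, object>(StringComparer.InvariantCultureIgnoreCase);

        public void Define(string name, object value) { symbols[name] = value; }

        public object Lookup(string name) { return symbols[name]; }
    }

    class Demo
    {
        static void Main()
        {
            var table = new SymbolTable();
            table.Define("SizedInt", 42);
            Console.WriteLine(table.Lookup("sizedint")); // 42 -- same identifier either way
        }
    }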
To answer your question, no. I don't think we should be capitalizing anything, period. It's annoying to type (to me) and allowing case differences creates (or allows) unnecessary confusion between capitalized and lowercased things, or camelCased and under_scored things, or other sets of semantically-distinct-but-conceptually-identical things. If the distinction is entirely semantic, let's not bother with it at all.