I have a VB.NET app that handles directory service attributes, and I have to display the attribute values. To get the values I use LDAP.
Microsoft's Active Directory has the syntax (or type) LARGE_INTEGER / INTEGER8. I have seen various LDAP browsers that display attributes of this type as a DateTime, but Microsoft's documentation says that this syntax is a 64-bit signed integer value.
My question: does the schema definition provide any information from which I can detect whether an attribute with the LARGE_INTEGER syntax should be handled as a DateTime or not?
Here is an example:
lastLogoff -> DateTime
msExchVersion -> No DateTime
Both attributes have the same syntax.
Thank you for helping!
Yes. The LARGE_INTEGER thing is an abstraction at the ADSI layer. If you look at the docs for the lastLogoff attribute, for example (http://msdn.microsoft.com/en-us/library/ms676822(v=vs.85).aspx), you'll see the actual AD syntax is Interval. You can grab the syntax for a given attribute off the attribute definition in the schema.
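For what it's worth, here is a rough VB.NET sketch of reading that syntax information off an attribute's schema definition (this assumes System.DirectoryServices, a domain-joined machine, and uses lastLogoff only as an example):
Imports System.DirectoryServices
Imports System.DirectoryServices.ActiveDirectory

Dim schema As ActiveDirectorySchema = ActiveDirectorySchema.GetCurrentSchema()
Dim prop As ActiveDirectorySchemaProperty = schema.FindProperty("lastLogoff")

' The managed wrapper exposes an abstracted syntax value...
Console.WriteLine(prop.Syntax)

' ...while the raw schema object carries attributeSyntax / oMSyntax.
Using entry As DirectoryEntry = prop.GetDirectoryEntry()
    Console.WriteLine(entry.Properties("attributeSyntax").Value) ' e.g. 2.5.5.16
    Console.WriteLine(entry.Properties("oMSyntax").Value)        ' e.g. 65
End Using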
According to the following post, it seems that there is no way to tell whether a LARGE_INTEGER attribute should be handled as a DateTime or not:
Same datatype Storage but different representation in AD (UsnChanged and LastLogon)
Is it possible to create variables of a specific type in Lua?
E.g. int x = 4
If this is not possible, is there at least some way to have a fake "type" shown before the variable so that anyone reading the code will know what type the variable is supposed to be?
E.g. function addInt(int x=4, int y=5), but x/y could still be any type of variable? I find it much easier to put the variable's type before it rather than putting a comment above the function to let any readers know what type of variable it is supposed to be.
The sole reason I'm asking isn't to limit the variable to a specific data type, but simply to have the ability to put a data type before the variable, whether it does anything or not, to let the reader know what type the variable is supposed to be without getting an error.
You can do this using comments:
local x = 4 -- int
function addInt(x --[[int]],
                y --[[int]])
You can make the syntax a = int(5) from your other comment work using the following:
function int(a) return a end
function string(a) return a end
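-- note: this redefinition shadows Lua's built-in string library table, so calls like string.format() will stop working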
function dictionary(a) return a end
a = int(5)
b = string "hello, world!"
c = dictionary({foo = "hey"})
Still, this doesn't really offer any benefits over a comment.
The only way I can think of to do this would be by creating a custom type in C.
Lua Integer type
No. But I understand your goal is to improve understanding when reading and writing function calls.
Stating the expected data type of parameters adds only a little in terms of giving a specification for the function. Also, some function parameters are polymorphic, accepting a specific value, or a function or table from which to obtain the value for a context in which the function operates. See string.gsub, for example.
When reading a function call, the only thing known at the call site is the name of the variable or field whose value is being invoked as a function (sometimes read as the "name" of the function) and the expressions being passed as actual parameters. It is sometimes helpful to refactor parameter expressions into named local variables to add to the readability.
When writing a function call, the name of the function is key. The names of the formal parameters are also helpful. But still, names (like types) do not comprise much of a specification. The most help comes from embedded structured documentation used in conjunction with an IDE that infers the context of a name and performs content assistance and presentations of available documentation.
luadoc is one such system of documentation. You can write luadoc for the functions you declare.
Eclipse Koneki LDT is one such IDE. Due to the dynamic nature of Lua, this is a difficult problem, so LDT is not always as helpful as one would like. (To be clear, LDT does not use luadoc; it evolved its own embedded documentation system.)
I'm trying to write a UPnP/DLNA client for videos and I would like to allow the option to sort by title and date.
With Windows 7/WMP as the server, I can use "dc:title" or "dc:date" for sorting and it seems to work, but testers have told me it doesn't work on other servers. Is there a universal way to know if sorting is allowed and what the sorting criteria should be?
Thanks.
There is a way to query this (but be prepared for broken implementations that lie about their abilities as well). Quoting the ContentDirectory service spec (v3):
2.3.3 SortCapabilities
This state variable is a CSV list of property names that the ContentDirectory service can use to sort Search() or Browse() action results. An empty string indicates that the device does not support any kind of sorting. A wildcard (“*”) indicates that the device supports sorting using all property names supported by the ContentDirectory service. The property names returned MUST include the appropriate namespace prefixes, except for the DIDL-Lite namespace. Properties in the DIDL-Lite namespace MUST always be returned without the prefix. All property names MUST be fully qualified using the double colon (“::”) syntax as defined in Section 2.2.20, “property”. For example, “upnp:foreignMetadata::fmBody::fmURI”
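So you can ask the server directly via the GetSortCapabilities action and only offer sort options for the properties it reports. A rough VB.NET sketch of that SOAP call (the control URL below is an assumption; read the real one from the device description document):
Imports System.Net
Imports System.Text

Dim controlUrl As String = "http://mediaserver:8200/ctl/ContentDir" ' placeholder
Dim soapBody As String =
    "<?xml version=""1.0"" encoding=""utf-8""?>" &
    "<s:Envelope xmlns:s=""http://schemas.xmlsoap.org/soap/envelope/"" " &
    "s:encodingStyle=""http://schemas.xmlsoap.org/soap/encoding/"">" &
    "<s:Body><u:GetSortCapabilities xmlns:u=""urn:schemas-upnp-org:service:ContentDirectory:1""/></s:Body></s:Envelope>"

Dim request As HttpWebRequest = CType(WebRequest.Create(controlUrl), HttpWebRequest)
request.Method = "POST"
request.ContentType = "text/xml; charset=""utf-8"""
request.Headers.Add("SOAPACTION", """urn:schemas-upnp-org:service:ContentDirectory:1#GetSortCapabilities""")

Dim payload As Byte() = Encoding.UTF8.GetBytes(soapBody)
Using stream = request.GetRequestStream()
    stream.Write(payload, 0, payload.Length)
End Using

' The <SortCaps> element in the response holds the CSV list described above.
Using response = request.GetResponse()
    Using reader As New IO.StreamReader(response.GetResponseStream())
        Console.WriteLine(reader.ReadToEnd())
    End Using
End Using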
Using VB.NET, the following snippet gives the error below.
Dim _account = Account.Find(Function(x As Account) x.AccountName = txtFilterAccountName.Text)
or similarly if I do
.SingleOrDefault (Function(x As Account) x.AccountName = txtFilterAccountName.Text)
will both give the error "The method 'CompareString' is not supported". If I make the same call searching for an integer (ID field) it works fine.
.SingleOrDefault (Function(x As Account) x.Id = 12)
So integer matching is fine but strings don't work. Is this a problem with the VB.NET templates?
No, this is not a problem with the VB.NET templates.
The problem is that you are not using a normal LINQ provider. Based on your tag (subsonic) I'm guessing you're using a LINQ to SQL query.
The problem is that under the hood, this is trying to turn your code into an expression tree which is then translated into an SQL like query. Your project settings are turning your string comparison into a call in the VB runtime. Specifically, Microsoft.VisualBasic.CompilerServices.Operators.CompareString.
The LINQ2SQL generator in question, or the VB compiler (I can't remember where this check is done off the top of my head), does not understand how to translate this into an equivalent bit of SQL, hence it generates an error. You need to use a string comparison function which is supported by LINQ2SQL.
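One workaround (assuming the provider can translate a plain Equals call - SubSonic's support may differ) is to avoid the VB = operator on strings, since that is what gets compiled into the CompareString call:
' String.Equals stays a method call in the expression tree instead of going
' through Microsoft.VisualBasic.CompilerServices.Operators.CompareString.
Dim _account = Account.Find(Function(x As Account) x.AccountName.Equals(txtFilterAccountName.Text))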
EDIT:
It looks like the CompareString operator should be supported in the Linq2SQL case. Does subsonic have a different provider which does not support this translation?
http://msdn.microsoft.com/en-us/library/bb399342.aspx
The problem is with SubSonic3's SQL generator and the expression tree generated from VB.NET.
VB.NET generates a different expression tree as noted by JaredPar and SubSonic3 doesn't account for it - see Issue 66.
I have implemented the fix as described but it has yet to merge into the main branch of SubSonic3.
BlackMael's fix has been committed:
http://github.com/subsonic/SubSonic-3.0/commit/d25c8a730a9971656e6d3c3d17ce9ca393655f50
The fix solved my issue which was similar to John Granade's above.
Thanks to all involved.
I notice in the MSDN documentation that there are multiple ways to declare a reference to a function in an external DLL from within a VB.NET program.
The confusing thing is that MSDN claims you can use the DllImportAttribute class with Shared Function prototypes only "in rare cases", and that you can simply use the Declare keyword instead, but I couldn't find an explanation for this statement.
Why are these different, and where would I appropriately use each case?
Apparently the Declare and DllImport statements are basically the same. You can use whichever you prefer.
Following is a discussion of the few points that may work a little differently in each, which may influence a preference for one over the other:
I started with an article from MSDN regarding Visual Studio 2003 titled Using the DllImport Attribute. (A bit old, but since the DllImport statement seems to have originated in .NET, it seemed appropriate to go back to the beginning.)
Given an example DllImport statement of:
[DllImport("user32.dll", EntryPoint = "MessageBox", CharSet = Unicode)]
int MessageBox(void* hWnd, wchar_t* lpText, wchar_t* lpCaption, unsigned int uType);
It says that if the EntryPoint value is left out, the CLR will look for the name of the function (MessageBox, in this case) as a default. However, in this instance, since a CharSet of Unicode was specified, the CLR would FIRST look for a function called "MessageBoxW" - the 'W' indicating the Unicode (wide-character) version. (The ANSI version would be "MessageBoxA".) If no "MessageBoxW" were found, THEN the CLR would look for an API function actually called "MessageBox".
Current specifics about the DllImportAttribute class can be found here, where I viewed the .NET Framework 4 version: DLLImportAttribute Class
A key comment in the Remarks section of this .NET Framework 4 page is that:
You apply this attribute directly to C# and C++ method definitions; however, the Visual Basic compiler emits this attribute when you use the Declare statement.
So, in VB.NET, using the Declare statement causes the compiler to generate a DLLImportAttribute.
There is also an important note in this page:
The DllImportAttribute does not support marshaling of generic types.
So, it would appear that if you want to use a generic type, you'd have to use a Declare statement.
Next, I headed to the Declare statement information. A Visual Studio 2010 version (Visual Basic statement info) was here: Declare Statement
A key item here was this note:
You can use Declare only at module level. This means the declaration context for an external reference must be a class, structure, or module, and cannot be a source file, namespace, interface, procedure, or block.
Apparently, if you want to set up an API call outside of a class, structure, or module, you'll have to use the DllImport attribute instead of the Declare statement.
The example Declare statement on this page is:
Declare Function getUserName Lib "advapi32.dll" Alias "GetUserNameA" (
    ByVal lpBuffer As String, ByRef nSize As Integer) As Integer
Following that example is this little tidbit of information:
The DllImportAttribute provides an alternative way of using functions in unmanaged code. The following example declares an imported function without using a Declare statement.
followed by, of course, an example of DllImport usage.
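That example is not reproduced here, but roughly, the DllImport form of the Declare shown above would look like this (a sketch that assumes the declaration lives inside a class; in a Module you would drop the Shared keyword):
Imports System.Runtime.InteropServices

' Note: with DllImport an output buffer is normally declared as a StringBuilder
' rather than a String, because the default string marshalling differs from Declare's.
<DllImport("advapi32.dll", EntryPoint:="GetUserNameA", CharSet:=CharSet.Ansi, SetLastError:=True)>
Private Shared Function getUserName(ByVal lpBuffer As System.Text.StringBuilder, ByRef nSize As Integer) As Integer
End Function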
Regarding Unicode vs ANSI results, according to this Declare page, if you specify a character set (Ansi, Unicode, or Auto - available in Declare, but not shown in the example above), the CLR will do the same type of automatic name search that DllImport does - for either Unicode or ANSI.
If you do not specify a character set in the Declare statement, then you must make sure that the function name in your Declare is the same as the function name in the actual API function's header file, OR you must specify an Alias value that matches the actual function name in the header file (as shown in the example above).
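For example, the character-set modifier on a Declare looks like this (a sketch using the classic MessageBox example):
' With Auto (or Ansi/Unicode) the runtime performs the same A/W entry-point
' search described above, so no Alias is needed here.
Declare Auto Function MessageBox Lib "user32.dll" (
    ByVal hWnd As IntPtr, ByVal text As String,
    ByVal caption As String, ByVal options As UInteger) As Integer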
I was not able to find any specific Microsoft documentation stating that either DllImport or Declare were preferred, or even recommended, over one another in any situation other than those noted above.
My conclusion, therefore, is:
Unless you need to place your definition in one of the places a Declare statement cannot be used, either technique will work fine,
and
if you're using DllImport, make sure you specify the CharSet value you want (Unicode or ANSI), or you may get unexpected results.
Declare is really an attempt to maintain a P/Invoke syntax which would be more familiar to Visual Basic 6.0 users converting to VB.NET. It has many of the same features as P/Invoke, but the marshalling of certain types, in particular strings, is very different and can cause a bit of confusion to people more familiar with DllImport rules.
I'm not entirely sure what the documentation is alluding to with the "rare" distinction. I use DllImport in my code frequently from both VB.NET and C# without issue.
In general, I would use DllImport over Declare unless you come from a Visual Basic 6.0 background. The documentation and samples for DllImport are much better and there are many tools aimed at generating DllImport declarations.
In my opinion, since this keyword doesn't look deprecated from what I have found, simply use the compile-time keyword rather than the attribute.
Also, when you use Declare, you don't need to write the End Function. The advantage of that is that you can create a whole module of function-import declarations line by line, with no need to pollute your code with DllImports and End Functions.
When you declare using the Declare keyword, the compiler treats the function as Shared anyway, so it can be accessed from other external objects.
But I think in current VB.NET they both compile down to the same thing, with no performance difference - though I make no guarantee on that.
So my conclusion is: use Declare instead of DllImport, especially given the quote you cited where Microsoft states that DllImport should be used only in rare cases.
If you need to set one of the following options, then use the DllImportAttribute attribute; otherwise, use Declare. From https://msdn.microsoft.com/en-us/library/w4byd5y4.aspx:
To apply the BestFitMapping, CallingConvention, ExactSpelling, PreserveSig, SetLastError, or ThrowOnUnmappableChar fields to a Microsoft Visual Basic 2005 declaration, you must use the DllImportAttribute attribute instead of the Declare statement.
It is unclear from that reference alone whether this applies only to "Visual Basic 2005", as the reference is from a .NET 4.5 article. However, I also found this article (https://msdn.microsoft.com/en-us/library/system.runtime.interopservices.dllimportattribute(v=vs.110).aspx), which is specific to the DllImportAttribute class in .NET 4.5:
the Visual Basic compiler emits this attribute when you use the Declare statement. For complex method definitions that include BestFitMapping, CallingConvention, ExactSpelling, PreserveSig, SetLastError, or ThrowOnUnmappableChar fields, you apply this attribute directly to Visual Basic method definitions.
This tells you that the Declare option is VB.NET syntactic sugar that is converted to DllImportAttribute at compile time, and it outlines the exact scenarios where using DllImportAttribute directly is recommended.
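For instance, CallingConvention is one of those fields: Declare always assumes StdCall, so a Cdecl export can only be declared with the attribute form (the DLL and function names below are made up):
Imports System.Runtime.InteropServices

' Hypothetical Cdecl export; the Declare statement cannot express the calling convention.
<DllImport("somenative.dll", CallingConvention:=CallingConvention.Cdecl)>
Private Shared Function add_numbers(ByVal a As Integer, ByVal b As Integer) As Integer
End Function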
Consider the following code:
private void DoThis() {
    int i = 5;
    var repo = new ReportsRepository<RptCriteriaHint>();

    // This does NOT work
    var query1 = repo.Find(x => x.CriteriaTypeID == i).ToList<RptCriteriaHint>();

    // This DOES work
    var query2 = repo.Find(x => x.CriteriaTypeID == 5).ToList<RptCriteriaHint>();
}
So when I hardwire an actual number into the lambda function, it works fine. When I use a captured variable in the expression, it comes back with the following error:
No mapping exists from object type ReportBuilder.Reporter+<>c__DisplayClass0 to a known managed provider native type.
Why? How can I fix it?
Technically, the correct way to fix this is for the framework that is accepting the expression tree from your lambda to evaluate the i reference; in other words, it's a limitation of the specific LINQ framework you are using. What it is currently trying to do is interpret i as a member access on some type known to it (the provider) from the database. Because of the way lambda variable capture works, the i local variable is actually a field on a hidden compiler-generated class - the one with the funny name - that the provider doesn't recognize.
So, it's a framework problem.
If you really must get by, you could construct the expression manually, like this:
ParameterExpression x = Expression.Parameter(typeof(RptCriteriaHint), "x");

var query = repo.Find(
    Expression.Lambda<Func<RptCriteriaHint, bool>>(
        Expression.Equal(
            Expression.MakeMemberAccess(
                x,
                typeof(RptCriteriaHint).GetProperty("CriteriaTypeID")),
            Expression.Constant(i)),
        x)).ToList();
... but that's just masochism.
Your comment on this entry prompts me to explain further.
Lambdas are convertible into one of two types: a delegate with the correct signature, or an Expression<TDelegate> of the correct signature. LINQ to external databases (as opposed to any kind of in-memory query) works using the second kind of conversion.
The compiler converts lambda expressions into expression trees, roughly speaking, as follows:
The syntax tree is parsed by the compiler - this happens for all code.
The syntax tree is rewritten after taking into account variable capture. Capturing variables is just like in a normal delegate or lambda - so display classes get created, and captured locals get moved into them (this is the same behaviour as variable capture in C# 2.0 anonymous delegates).
The new syntax tree is converted into a series of calls to the Expression class so that, at runtime, an object tree is created that faithfully represents the parsed text.
LINQ to external data sources is supposed to take this expression tree and interpret it for its semantic content, and interpret symbolic expressions inside the tree as either referring to things specific to its context (e.g. columns in the DB), or immediate values to convert. Usually, System.Reflection is used to look for framework-specific attributes to guide this conversion.
However, it looks like SubSonic is not properly treating symbolic references that it cannot find domain-specific correspondences for; rather than evaluating the symbolic references, it's just punting. Thus, it's a SubSonic problem.
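For what it's worth, a provider (or a thin wrapper in front of it) can pre-evaluate captured locals before translating the tree. A rough VB.NET sketch, assuming .NET 4's public ExpressionVisitor (SubSonic itself may or may not do something like this):
Imports System.Linq.Expressions

' Replaces member accesses on compiler-generated closure objects
' (the <>c__DisplayClass instances) with their current values.
Public Class CapturedLocalEvaluator
    Inherits ExpressionVisitor

    Protected Overrides Function VisitMember(node As MemberExpression) As Expression
        ' A member access whose target is a ConstantExpression is a captured
        ' local (or a field on some other closed-over instance): evaluate it
        ' and inline the result as a constant.
        If node.Expression IsNot Nothing AndAlso
           TypeOf node.Expression Is ConstantExpression Then
            Dim value = Expression.Lambda(node).Compile().DynamicInvoke()
            Return Expression.Constant(value, node.Type)
        End If
        Return MyBase.VisitMember(node)
    End Function
End Class

Running New CapturedLocalEvaluator().Visit(predicate) over the lambda before handing it to the provider turns the x.CriteriaTypeID == i comparison into a comparison against a plain constant, which any provider can translate.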