Access Compiler Constants in code - vb.net

Is it possible to use a constant defined by the compiler in code like below?
#If DEALER_DEBUG = "ID12345" Then
    If Dealer.ID = DEALER_DEBUG Then
        'Do something
    End If
#End If
I'm running batch processes and I'm experiencing problems with one of my customers' data. I want to add special code for only that customer, but I want to keep the code in place so I can easily switch the customer ID in the future should I need to debug a different customer.
The source code of the compiled DLL would then look like this:
If Dealer.ID = "ID12345" Then
    'Do something
End If

No. Compiler directives are just that, directives to the compiler. They are not included in the generated IL code, and so cannot be accessed at runtime.

You can use the constant in the compile-time #If, but you can't use it in the runtime If.
You can define your custom compiler constants in the project properties under Compile -> Advanced Compile Options -> Custom Constants, or alternatively by using a #Const directive.
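For example, here is a minimal sketch of the #Const approach (the Dealer object is assumed from the question; the literal ID has to be repeated in a regular Const because the value of a compiler constant is not available to runtime code):
#Const DEALER_DEBUG = True

Public Sub ProcessDealer(dealer As Dealer)
#If DEALER_DEBUG Then
    'Compiled in only when DEALER_DEBUG is set; edit the literal to debug a different customer.
    Const DebugDealerId As String = "ID12345"
    If dealer.ID = DebugDealerId Then
        'Do something
    End If
#End If
End Sub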
There are many better ways to do this. I don't know exactly what you are attempting to accomplish, but you might want to consider some sort of factory pattern + plugins + config that allows you to provide a plugin assembly to that client for the extra functionality. It is arguably a lot more work to create an extensible app, but if you find yourself needing to do these kinds of things, it's much better to write it to be extensible from the start than to have to go back and refactor later.

Related

How to mock module attributes and other modules in Elixir?

I'm very new to Elixir, so this is probably basic, but I couldn't see much online. If I have the following code:
defmodule A do
  def my_first_function do
    # does stuff
  end
end

defmodule B do
  @my_module_attribute A.my_first_function()
end
then how do I mock module A in my tests so I can just tell the tests what I want it to return?
Also, is it possible to just mock @my_module_attribute instead? It would be good to have an answer to both approaches (and to know which is considered the better pattern).
I think perhaps you want to use module attributes for more than what they are useful for, and perhaps there is some confusion over the exact definition of "mock".
Avoid thinking of module attributes as "class variables" -- even though they look like they might serve the same purpose and they are in the same place, their behavior is different. Be especially careful when a module attribute relies on a function call. One common pitfall is to use module attributes to store values read from configuration, e.g. using Application.fetch_env!/2. The problem is that module attributes are evaluated at compile time (not at run time), so you can easily end up with an unexpected value. (The compiler now warns about this gotcha explicitly, and Application.compile_env!/2 is now provided to better communicate that particular use case.)
I usually reserve module attributes for raising the visibility of simple constants and I tend to avoid using them for storing the result of any function execution.
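As a concrete illustration of that configuration gotcha, here is a small sketch (the application and key names are made up):
defmodule MyApp.Settings do
  # Read once, at compile time. Elixir checks this value against the runtime
  # configuration when the application boots and raises if the two have diverged.
  @api_url Application.compile_env!(:my_app, :api_url)
  def compiled_api_url, do: @api_url

  # Read at run time, on every call.
  def runtime_api_url, do: Application.fetch_env!(:my_app, :api_url)
end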
When it comes back down to "mocking", you still have to think about the fact that Elixir is compiled -- it's not JavaScript. Someone geekier than me can explain the mechanics, but the module attributes don't exist the same way at runtime as they do at compile time.
"Mocking" during tests usually means substituting one module or function for another, and this swap is often easier to do at runtime. One common pattern looks a bit like Dependency Injection (but purists may object to the comparison).
Consider a function like this that relies on some OtherModule to do its work:
def my_function(input, opts \\ []) do
  # Allow the collaborating module to be swapped at run time; default to OtherModule.
  other_module = Keyword.get(opts, :service, OtherModule)
  other_module.get_thing(input)
  # do more stuff...
end
Instead of hard-coding OtherModule inside my_function, its name is read from the optional opts. This provides a useful way to substitute a different module for OtherModule when you need to test the function.
In your test, you can do something like this:
test "example mock" do
  assert something = MyApp.MyMod.my_function("foo", service: MockService)
end
When the test provides MockService as the :service module, the get_thing function will be called on the MockService module and not on OtherModule. If the provided module does not define a get_thing function, the code will fail to execute. This is where having a behaviour comes in handy, because it helps guarantee that your module has implemented the needed functions. The Mox testing library, for example, relies on behaviour contracts, but from the example above you can see the premise.
If you squint, you can see that this "injection" approach is somewhat similar to how JavaScript often accepts a callback function as an argument, but in Elixir it is more common to pass around module names instead of captured functions.
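For completeness, here is a sketch of what the behaviour contract mentioned above could look like (the module names simply follow the example and are not from any real library):
defmodule MyApp.Service do
  # The contract every pluggable :service module must satisfy.
  @callback get_thing(term()) :: term()
end

defmodule OtherModule do
  @behaviour MyApp.Service

  @impl true
  def get_thing(input) do
    # real implementation goes here
    input
  end
end

defmodule MockService do
  @behaviour MyApp.Service

  @impl true
  def get_thing(_input), do: :fake_thing
end
With the @behaviour declarations in place, the compiler warns if either module forgets to implement get_thing/1.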

ClassImp preprocessor macro in ROOT - Is it really needed?

Do I really have to use the ClassImp macro to benefit from the automatic dictionary and streamer generation in ROOT? Some online tutorials and examples mention it, but I noticed that simply adding the ClassDef(MyClass, <ver>) macro to MyClass.h and processing it with rootcint/rootcling already generates most of such code.
I did look at Rtypes.h, where these macros are defined, but following preprocessor macros calling each other is not easy, so it would be nice if experts could confirm the role of ClassImp. I am specifically interested in recent versions of ROOT (>= 5.34).
Here is the answer I got on the roottalk mailing list, confirming that the usage of ClassImp is essentially outdated.
ClassImp is used to register in the TClass the name of the source file for the class. This was used in particular by THtml (which has now been deprecated in favor of Doxygen). So unless your code/framework needs to know the name of the source files, it is no longer necessary to have ClassImp.
ClassDef is necessary for classes inheriting from TObject (or from any class that has a ClassDef). In the other cases, it provides an accelerator that makes the I/O slightly faster (and thus is technically not compulsory in this case). It also assigns a version number to the schema layout, which simplifies writing schema evolution rules (on the other hand, there are other alternatives for assigning a version number to the schema layout).
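For reference, a minimal sketch of the ClassDef-only usage being discussed (the class is made up; the header still has to be processed by rootcint/rootcling to generate the dictionary):
#include "TObject.h"

class MyClass : public TObject {
public:
   MyClass() {}
   virtual ~MyClass() {}

   ClassDef(MyClass, 1); // schema version 1; no matching ClassImp in the .cxx file
};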
What exactly are you trying to do? The ClassImp and ClassDef macros add members to the class that provide Run-Time Type Information and allow the class to be written to ROOT files. If you are not interested in that, then don't bother with these macros.
I never use them.

keep around a piece of context built during compile-time for later use in runtime?

I'm aware this might be a broad question (there's no specific code for you to look at), but I'm hoping I'd get some insights as to what to do, or how to approach the problem.
To keep things simple, suppose the compiler that I'm writing performs these three steps:
parse (and bind all variables)
typecheck
codegen
Also, the language that I'm building the compiler for wants to support late analysis/late binding (i.e., it has a function that takes a String, which is to be compiled and executed as a piece of source code at runtime).
Now, during the parse phase, I have a piece of context that I need to keep around until run time for the sole benefit of the aforementioned function (because it needs to parse and typecheck its argument in that context).
So the question, how should I do this? What do other compilers do?
Should I just serialise the context object to disk (codegen for it) and resurrect it during run-time or something?
Thanks
Yes, you'll need to emit the type information (or other context, you weren't very specific) in your object/executable files, so that your eval can read it at runtime. You might look at Java's .class file format for inspiration; Java doesn't have eval as such, but you can dynamically spin new bytecode at runtime that must be linked in a type-safe manner. David Conrad's comment is spot-on: this information can also be used to implement reflection, if your language has such a feature.
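A hypothetical sketch of that idea in C++, with the file format and all names invented for illustration: the compiler dumps the binding/type context it built, and the emitted program's runtime support loads that dump so eval() can parse and typecheck source strings in the same context.
#include <fstream>
#include <map>
#include <string>

using Context = std::map<std::string, std::string>; // variable name -> type name

// Compiler side: persist the context next to the generated binary at the end of codegen.
void save_context(const Context& ctx, const std::string& path) {
    std::ofstream out(path);
    for (const auto& entry : ctx)
        out << entry.first << ' ' << entry.second << '\n';
}

// Runtime side: rebuild the context before the first call to eval().
Context load_context(const std::string& path) {
    Context ctx;
    std::ifstream in(path);
    std::string name, type;
    while (in >> name >> type)
        ctx[name] = type;
    return ctx;
}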
That's as much as I can help you without more specifics.

Inline function versus macro

I'm working on an iOS app using C and Objective-C, and I want to write a very small piece of code that will be executed thousands of times from more than one place. Is it safe to make this an inline function and be sure that it will always be expanded (I won't ever be taking its address) or should I make it a macro? The code is small and it will be executed very frequently, so I'd like to make sure I won't end up with thousands of function calls for it, but still I'd like the type safety of the function approach if possible...
If you want to be sure that a function is inlined, make it "extern inline" (this is a GNU C feature). Such functions are only used for inlining; the compiler will never generate a "real" function for them. Thus, if the inlining fails, you should get linker errors. I assume clang has "inherited" this feature.
In general, always use inline instead of macros, if possible. There's a reason why many C compilers had it for ages and C++ finally added it as a core feature; it makes things a lot safer and more reliable to use. There are still things that need macros, but those are few and far between.
Yes, you should use an inline function over a macro.
The performance will be identical to a macro (the code is inline, after all) and you'll get type safety as well.
N.B., this assumes that your function is simple enough for the compiler to inline. gcc's -Winline option warns if this isn't the case; not sure what flags do the same on your platform.
Also see this post for cases when you might prefer a macro (e.g., deferred evaluation), but based on your question it sounds like an inline function is the clear choice.
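To make the type-safety and multiple-evaluation difference concrete, here is a small sketch in plain C (the names are invented for illustration):
#include <stdio.h>

/* A macro expands its arguments textually and is not type-checked, so an
 * argument with side effects can be evaluated more than once. */
#define MAX_MACRO(a, b) ((a) > (b) ? (a) : (b))

/* An inline function is type-checked and evaluates each argument exactly once. */
static inline int max_inline(int a, int b) { return a > b ? a : b; }

static int calls = 0;
static int next_value(void) { return ++calls; }

int main(void) {
    calls = 0;
    int m = MAX_MACRO(next_value(), 0); /* next_value() runs twice: m == 2, calls == 2 */
    printf("macro:  m=%d calls=%d\n", m, calls);

    calls = 0;
    m = max_inline(next_value(), 0);    /* next_value() runs once: m == 1, calls == 1 */
    printf("inline: m=%d calls=%d\n", m, calls);
    return 0;
}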
I may be wrong, but as I understand it, a compiler can only inline a function whose definition is visible in the translation unit it is currently compiling (which is why inline functions are usually defined in headers). If your inline function is defined in file A and you call it from another file, it cannot be inlined unless the linker performs link-time optimization.
This is because the compiler compiles one C file at a time into one object file. It cannot pull the inlined function's body out of another object file: firstly, that file may not have been compiled yet, and secondly, the compiler wouldn't know which object file to look in anyway.

Auto-generate Objective-C method headers from implementation?

Is there a tool that will take a list of Objective-C methods and produce the corresponding header definitions?
Often when writing code in my implementation file, I find I need to add, remove, or modify method definitions. This requires the tedious (and thoroughly automatable) step of switching back to my header file and making the exact same changes, twice.
Whatever happened to DRY? What kind of tools can I use to make life easier here? Thanks.
You can try Accessorizer:
http://www.kevincallahan.org/software/accessorizer.html
It automates most of the work regarding properties; it might work for methods too.
Sadly, it's not free.
I don't know of any existing tools (although Interface Builder does allow you to define outlets and actions, and then generate a header and implementation skeleton for you based on those). Remember though that the implementation file can contain information that should not go in the header (such as private methods and instance variables/properties), so it would be difficult for any tool to do this in any case.
For the moment, in Xcode, you can split the window and see both side by side (in Xcode 4, this is the Assistant). Alternatively, you can press Alt-Command-UpArrow to see the corresponding file.