Programmatic signalling of Aurelia value converter - aurelia

I'm trying to get signalling working in value converters outside of the typical aurelia templating process.
I want to be able to signal my value converter to re-bind (as per https://aurelia.io/docs/binding/value-converters#signalable-value-converters) but I have a dynamic template and I'm applying the value converter by just getting it from the container and calling myconverter.toView(params).
Doing it this way bypasses the call to ValueConverter.prototype.connect, which occurs as part of the template binding process. ValueConverter.prototype.connect is where the signals are registered, so my signal is not being picked up...

As you have noticed, signaling can and should be understood as a way to notify all bindings that are connected to a particular value converter; if there is no connection, there is no one to receive the signal.
In your case, as I understand it, you want to reuse some functionality from that value converter. If so, this can be achieved by extracting the utility code out of the value converter and making it reusable wherever you currently call myconverter.toView(params).
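The suggested refactor can be sketched as follows (in Python, purely to show the shape; the names format_currency and CurrencyValueConverter are hypothetical, and in Aurelia you would do the same with your converter class and a plain exported function):

```python
# Hypothetical converter: the formatting logic lives in a plain function,
# so it can be reused directly, without going through the binding machinery.
def format_currency(value, symbol="$"):
    return f"{symbol}{value:,.2f}"

class CurrencyValueConverter:
    # toView simply delegates to the shared utility function
    def to_view(self, value, symbol="$"):
        return format_currency(value, symbol)

print(format_currency(1234.5))                   # $1,234.50
print(CurrencyValueConverter().to_view(1234.5))  # $1,234.50
```

Code that previously reached into the container for the converter can call the utility function directly, and the converter itself stays signalable for template bindings.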

Related

Using micropython to initialize a UART bus, and I'm getting an error "missing 1 required positional argument"

I have the following code I am trying to run on an ESP-WROOM-32:
from machine import UART

def do_uart_things():
    uart = UART.init(baudrate=9600, bits=8, parity=None, stop=1, rx=34, tx=35)

do_uart_things()
I am attempting to initialize a UART bus according to the documentation: https://docs.micropython.org/en/latest/library/machine.UART.html. The documentation suggests that only baudrate, bits, parity, and stop are required; however, I get the "missing 1 required positional argument" error. I cannot figure out why it is giving this error.
I am also assuming that the rx and tx parameters are automatically converted to the correct type of pin, as needed by the UART class, rather than me having to manually manage it.
I have managed to get slightly similar code working:
from machine import UART

def do_uart_things():
    uart = UART(1, 9600)
    uart.init(baudrate=9600, bits=8, parity=None, stop=1, rx=34, tx=35)
    # Pin numbers taken from the ESP data sheet--they might not be correctly formatted

do_uart_things()
This has me thinking the documentation is unintentionally misleading: the leading example is not meant as "initialize it this way OR that way", but rather requires that both things be done.
Am I correct in thinking the latter code example is the correct way to use MicroPython's UART functionality? I am also open to referrals to any good examples of UART and I2C usage in MicroPython, since I've found the documentation to be a little shy of great...
"UART objects can be created and initialised using:..." can be a little misleading. What it means is that the object can only be created with the constructor; it can be initialised either in that constructor call, or later, after the object has been created, by calling the init method on it.
As you can see, the class constructor requires a first parameter, id, whereas the init() method does not. So you can use the constructor:
uart = UART(1, baudrate=9600, bits=8, parity=None, stop=1, rx=34, tx=35)
but you cannot call UART.init() on its own: it is not a constructor but a method, so it needs to operate on an instance, not on the class.
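The error message itself is ordinary Python behavior, which can be shown without any hardware (FakeUART below is a hypothetical stand-in for machine.UART):

```python
# init() is an instance method: calling it on the class itself leaves
# the `self` parameter unfilled, which is exactly the reported error.
class FakeUART:  # hypothetical stand-in for machine.UART
    def __init__(self, id, baudrate=9600):
        self.id = id
        self.baudrate = baudrate

    def init(self, baudrate=9600, bits=8, parity=None, stop=1, rx=None, tx=None):
        self.baudrate = baudrate

try:
    FakeUART.init(baudrate=9600)  # no instance, so `self` is the missing argument
except TypeError as e:
    print(e)

uart = FakeUART(1, 9600)   # create the instance first...
uart.init(baudrate=19200)  # ...then init() works on it
```

This is why the two-step version (construct, then init) succeeds while UART.init(...) alone does not.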

Does Binding Order Matter When Using WhenInjectedExactlyInto and a Default Binding?

With multiple Ninject modules, I end up having a binding order for a particular interface which looks like this:
Kernel.Bind<ILogger>().To<Logger>().WhenInjectedExactlyInto(typeof(TroubleshootingLogger), typeof(RegularAndStashLogger), typeof(LogStashLogger), typeof(KafkaSendClient));
Kernel.Bind<ILogger>().To<TroubleshootingLogger>();
Kernel.Bind<ILogger>().To<RegularAndStashLogger>().WhenInjectedExactlyInto<ProcessConfiguration>();
My question is: when I call the kernel for an instance of ProcessConfiguration, will it inject TroubleshootingLogger (the default binding) or RegularAndStashLogger (the exact binding)?
I went ahead and built a small test program to determine this myself (I acknowledge I should have done this first).
As it turns out, Ninject does appear to check all "WhenInjectedExactlyInto" bindings before falling back to a default binding.
The program (which depends on Ninject to run, duh): pastebin.com/9Kpsb25h
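The behavior the test demonstrated can be modeled with a tiny toy resolver (Python, purely illustrative; this is not Ninject's actual code): bindings with a When-condition are matched first, and the unconditional binding is only a fallback, regardless of registration order.

```python
# Toy model of conditional-vs-default binding resolution.
# Each binding is (implementation_name, optional_condition).
bindings = [
    ("TroubleshootingLogger", None),  # unconditional default, registered first
    ("RegularAndStashLogger", lambda target: target == "ProcessConfiguration"),
]

def resolve(target):
    # Conditional bindings whose condition matches the injection target win...
    conditional = [impl for impl, when in bindings if when and when(target)]
    if conditional:
        return conditional[0]
    # ...and only if none match does the unconditional binding apply.
    return next(impl for impl, when in bindings if when is None)

print(resolve("ProcessConfiguration"))  # RegularAndStashLogger
print(resolve("SomeOtherService"))      # TroubleshootingLogger
```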

force VS to autoregen just before compile

VB in VS2008 under Windows 7 (64):
I need to change the value of a Property of a Component at some unpredictable time in DesignMode, and want the previously unknown new value to be embedded in the executable that results from VS compilation (as opposed to serializing it to some external file).
I have resorted to a text edit to swap the new value into the autogenerated Component initialization code in a prebuild event handler. This works fine, but it is a little hacky for my taste. Is there some way instead to force VS to refresh that text?
By luck, I found something that seems to work to force VS to autogenerate initialization code for the runtime instance of a Component, which is what I was after. (I needed successful communication between design time and runtime for Components. This is easy for Controls, which use the latest design-time BackgroundImage bitmap at runtime: you need only hide the Property value in the bitmap, which can be done entirely within the rules using GetPixel and SetPixel.) I considered various hacks, but I hit upon the following, which works and makes sense (though I might be completely FoS about the "why"; if you know better, please educate me):
As I understand it, soon after a Component is dropped on a design surface in VS (and before it is rendered in the Component Tray), Visual Studio adds it to a collection of Components belonging to a Container. Adding it to the Container's collection is one step in a sequence of happenings that includes Visual Studio's autoregeneration of the Init procedure that will be used for the Component's root at runtime, and which includes values for the Public Properties of the Component. If you override the Site Set procedure for your Component (the creation of ISite is an early step in that sequence) and set a value for one of its Public Properties in the override, that value will show up in the autoregenerated text. This is almost what I wanted, except that it only happened when VS called Set Site, and I needed it to happen at any time I chose.
Then I took a flyer, and in the UI that sets the Property value in question (at some unknowable time), I added code to remove the Component from the Container's collection and then re-add it, hoping that this might again set off a sequence of happenings that would lead to VS again autoregenerating the Init code, this time with the new value of the Property. It apparently did. Yay.
By deciding when to re-add a Component to the Container's Components collection, I am now able to force VS to write in the autogenerated Init text any value I assign to a Public Property of that Component, and hence embed the value in the executable when it is compiled.
This technique is vulnerable to changes in the (undocumented) way that Microsoft implements autogeneration, and so is arguably a hack. But even documented features are subject to change. Backward-compatibility is a nice idea, but sometimes it has to give way. And delivery is a requirement. It would be great to know that your code will still be good in any future version of VS, but that, sadly, can't happen, hack or no.
Of course, documented features are in general less subject to change than undocumented ones. But the logic of autogeneration after all the initial Property values are set is pretty compelling. That Microsoft uses the same sequence later on is not so inherently logical, but doing it a different way would cost Microsoft money for no apparent gain. And Microsoft and their ilk (are legally required to) make decisions based on the bottom line. So the status quo seems like a good bet.

Erlang serialization

I need to serialize a function in Erlang, send it over to another node, deserialize it, and execute it there. The problem I am having is with files. If the function reads from a file which is not present on the second node, I get an error. Is there a way to differentiate between serializable and non-serializable constructs in Erlang, so that if a function makes use of a file or a pid, it fails to serialize?
Thanks
First of all, if you are sending anonymous functions, be extremely careful with that. Or, rather, just don't do it.
There are a couple of cases in which such a function won't be executed at all, or will be executed in a completely wrong way.
Every function in Erlang, even an anonymous one, belongs to some module (the one it was constructed in, to be precise). If the function was built in the REPL, it is bound to the erl_eval module, which is even more dangerous (I'll explain why further on).
Say you start two nodes: one of them has a module named 'foo' loaded, and the other doesn't (and cannot load it). If you construct a lambda inside module 'foo', send it to the second node, and try to call it, it will fail with {error, undef}.
There can be another funny problem. Make two different versions of module 'foo', each implementing a 'bar' function that constructs a lambda (with the lambdas differing between the versions). You'll get yet another error when trying to call the sent lambda.
I think there could be other tricky aspects of sending lambdas to different nodes, but trust me, that's already quite a lot.
Secondly, there are tons of ways you can get a process or a port inside a lambda without knowing it in advance.
Even though there is a way of extracting closed-over variables from a lambda (if you look at the binary form of a lambda, all the external variables used inside it are listed starting from the 2nd byte), they are not the only potential source of pids or ports.
Consider an easy example: you call the self() function inside your lambda. What will it return? Right, a pid. Okay, we can probably parse the binary and catch this function call, along with a dozen other built-in functions. But what will you do when you call some external function? ets:lookup(sometable, somekey)? some_module:some_function_that_returns_god_knows_what()? You don't know what they are going to return.
Now, to what you can actually do here:
When working with files, always send filenames, not descriptors. If you need the file's position or other state, send that as well. File descriptors shouldn't be known outside the process in which they were opened.
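The same distinction exists in other languages. As an illustrative analogy in Python: an open file handle is process-local state and refuses to serialize, while the filename is plain data and travels fine.

```python
import pickle
import tempfile

f = tempfile.NamedTemporaryFile()

# The open file object cannot be serialized...
try:
    pickle.dumps(f.file)
except TypeError as e:
    print("cannot serialize:", e)

# ...whereas its name is an ordinary string and round-trips without trouble.
payload = pickle.dumps(f.name)
print(pickle.loads(payload) == f.name)  # True
```

The receiving side would reopen the file by name (and seek to a sent position if needed), exactly as the advice above suggests for Erlang.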
As I mentioned, do everything you can to avoid sending lambdas to other nodes. It's hard to say how to avoid that without knowing your exact task. Maybe you can send a list of functions to execute, like:
[{module1, parse_query},
{module1, dispatch_parsed_query},
{module2, validate_response},
{module2, serialize_query}]
and pass arguments through this sequence of functions (making sure all the modules exist everywhere). Maybe you can stick to some module that is going to be frequently changed and deployed over the entire cluster. Maybe you might want to switch to JS/Lua and use externally started ports (Riak uses SpiderMonkey to process JS-written lambdas for Map/Reduce requests). Finally, you can actually get a module's object code, send it over to another node, and load it there. Just keep in mind that's not safe either: you can break some running processes, lose some constructed lambdas, and so on.
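The "send names, not functions" idea sketched above can be illustrated in Python (the module/function names here are stand-ins; a real pipeline would name your own modules): the sender ships (module, function) name pairs, and the receiver resolves and applies them in order.

```python
import importlib

# Hypothetical pipeline: only *names* travel over the wire; the receiving
# side resolves each name against its own loaded modules.
pipeline = [("builtins", "str"), ("builtins", "len")]  # stand-ins for your own modules

def run_pipeline(steps, value):
    for mod_name, fn_name in steps:
        fn = getattr(importlib.import_module(mod_name), fn_name)
        value = fn(value)  # pipe each result into the next step
    return value

print(run_pipeline(pipeline, 12345))  # str(12345) -> "12345", then len -> 5
```

Because only names are transferred, the code that actually runs is always the receiver's own loaded version, sidestepping the version-mismatch problems described above.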

What COM support is needed to get my custom DirectShow filter property page to work for a remote filter from the Running Object Table

I have some custom DirectShow filters with custom property pages. These work fine when the filter is in the same process as the property page.
However when I use the 'connect to remote graph' feature of Graph Edit the property pages don't work.
When the property page does a QueryInterface for my private COM interface on the remote filter, the QueryInterface fails. Property pages of Microsoft filters (e.g. the EVR video renderer) work fine so it must be possible.
Presumably this is happening because my filter's private interfaces only work 'in process' and I need to add extra COM support so that these interfaces will work with an 'out of process' filter. What do I need to do in COM terms to achieve this?
Do the DirectShow baseclasses support these COM features? Can I reliably detect when the filter is running out of process and refuse to show the property page gracefully?
One option is to build a proxy/stub pair. Another, and much easier, option is to make your private interface automation-compatible (derive from IDispatch; type constraints apply) and put it into a type library, which is then attached to the DLL and registered the usual way. A proxy/stub pair will be supplied automatically for such an interface, with no need to bother.
The DirectShow base classes do not offer built-in support for this. Stock DirectShow filters shipped with Windows might not be compatible with passing interfaces over process boundaries; my guess is that it depended on the team in charge of the respective development years ago. Video renderers, for instance, have interfaces that you can connect to remotely. Audio renderers, on the contrary, have interfaces built without that capability in mind, and they simply crash one of the processes attempting to make such a connection (the client-side process, if my memory serves me right).