Configuration for a PowerShell module created via the .NET Framework - WCF

What's the best practice when you have dependencies that you want to be able to configure when creating a PowerShell module in C#?
My specific scenario is that the PowerShell module I am creating via C# code will use a WCF service. Hence, the service's URL must be something that the clients can configure.
Is there a standard approach on this? Or will this be something that must be custom implemented?

A somewhat standard way to do this is to allow a value to be provided as a parameter, or to default to reading a special variable via PSCmdlet's GetVariableValue. This is what the built-in Send-MailMessage cmdlet does. It reads the variable PSEmailServer if no server is provided.
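For illustration, a cmdlet written in C# might accept the URL as an optional parameter and fall back to a module-defined variable, similar to the PSEmailServer pattern. This is only a sketch; the cmdlet and variable names (Connect-MyDmsService, $MyModuleServiceUrl) are hypothetical:

using System;
using System.Management.Automation;

[Cmdlet(VerbsCommunications.Connect, "MyDmsService")]  // hypothetical cmdlet name
public class ConnectMyDmsServiceCommand : PSCmdlet
{
    // Optional explicit override, analogous to Send-MailMessage's -SmtpServer parameter
    [Parameter]
    public string ServiceUrl { get; set; }

    protected override void ProcessRecord()
    {
        // Fall back to a well-known variable ($MyModuleServiceUrl) if no parameter was given
        string url = ServiceUrl ?? GetVariableValue("MyModuleServiceUrl") as string;

        if (string.IsNullOrEmpty(url))
        {
            ThrowTerminatingError(new ErrorRecord(
                new ArgumentException("No service URL provided and $MyModuleServiceUrl is not set."),
                "MissingServiceUrl", ErrorCategory.InvalidArgument, null));
        }

        WriteObject(url);  // a real cmdlet would open the WCF channel here instead
    }
}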

I might not be understanding your question. So I'll posit a few scenarios:
1. Your PS module will always use the same WCF endpoint. In that case you could hardcode the URL in the module.
2. You have a limited number of endpoints to choose from, and there's some algorithm or best practice to associate an endpoint with a particular user, such as the closest geographically, based on the dept or division the user is in, etc.
3. It's completely up to the end user's preference to choose a URL.
For case #2, I suggest you implement the algorithm/best practice and save the result someplace - as part of the module install.
For case #3, using an environment variable seems reasonable, or a registry setting, or a file in one of the user's profile directories. Probably more important than where you persist the data, though, is the interface you give users to change the setting. For example, if you used an environment variable, it would be less friendly to tell the user to go to Control Panel, System, Advanced, Environment, User variable, New... than to provide a simple PS function to change the URL. In fact, I'd say providing a cmdlet/function to perform configuration is the closest to a "standard" I can think of.
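For instance, a small configuration cmdlet could persist the URL to a file under the user's profile, and the module's other cmdlets would read it back from there. This is a rough sketch; the cmdlet, folder, and file names are made up:

using System;
using System.IO;
using System.Management.Automation;

[Cmdlet(VerbsCommon.Set, "MyDmsServiceUrl")]  // hypothetical configuration cmdlet
public class SetMyDmsServiceUrlCommand : PSCmdlet
{
    [Parameter(Mandatory = true, Position = 0)]
    public string Url { get; set; }

    protected override void ProcessRecord()
    {
        // Persist the setting under the user's profile so it survives PowerShell sessions
        string dir = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.ApplicationData),
            "MyDmsModule");
        Directory.CreateDirectory(dir);
        File.WriteAllText(Path.Combine(dir, "serviceurl.txt"), Url);
    }
}

The user would run Set-MyDmsServiceUrl once, and every other cmdlet in the module could read that file (or accept a parameter override) to locate the endpoint.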

Related

How to organize endpoints when using FeathersJS's seemingly restrictive api methods?

I'm trying to figure out if FeathersJS suits my needs. I have looked at several examples and use cases. FeathersJS uses a set of request methods: find, get, create, update, patch and delete. No other methods, let alone custom methods, can be implemented and used, as confirmed in this other SO post.
Let's imagine an application where users can save their app settings. Without regard to method conventions, I would create an endpoint describing the action performed by the user. In this case we could have, for instance, /saveSettings, knowing there won't be any setting-finding, -creation, -updating (only some -patching) or -deleting. I might also need a /getSettings route.
My question is: can every action be reduced down to these request methods? To me, these actions are strongly bound to a specific collection/model. Sometimes, we need to create actions that are not bound to a single collection and could potentially interact with more than one collection/model.
For this example, I'm guessing it would be translated in FeathersJS with a service named Setting which would hold two methods: get() and patch().
If that is the correct approach, it looks to me as if this solution is more server-oriented than client-oriented in the sense that we have to know, client-side, what underlying collection is going to get changed or affected. It feels like we are losing some level of freedom by not having some kind of routing between endpoints and services (like we have in vanilla ExpressJS).
Here's another example: I have a game character that can skill up. When the user decides to skill up a particular skill, a request is sent to the server. This endpoint could look like POST /skillUp. What would it be in FeathersJS? Implementing SkillUpService#create?
I hope you get the issue I'm trying to highlight here. Do you have some ideas to share or recommendations on how to organize the API in this particular framework?
I'm not an expert on FeathersJS, but if you build your database and models with good logic, these methods are all you need:
For the settings example, saveSettings corresponds to setting.patch({options}), so to the route settings/:id?options (method PATCH), since the user already has some default settings (created with the user). getSettings would correspond to setting.find(query).
To create the user AND the settings, I guess you have a method that calls setting.create({defaultOptions}) when the user CREATE route is called. This would be the right way.
For the skillUp route, it depends on the design of your database, but I guess it would be something like a table that gives you the level/skills/character, so you need a service for that specific table and to call skillLevel.patch({character, level}).
In addition to the correct answer that @gui3 has already given, it is probably worth pointing out that Feathers is intentionally restrictive in order to help you create RESTful APIs which focus on resources (data) and a known set of methods you can execute on them.
Aside from the answer you linked, this is also explained in more detail in the FAQ, and an introduction to REST API design and why Feathers does what it does can be found in this article: Design patterns for modern web APIs. These are best practices that helped scale the internet (specifically the HTTP protocol) to what it is today and can work really well for creating APIs. If you still want to use the routes you are suggesting (which are not RESTful), then Feathers is not the right tool for the job.
One strategy you may want to consider is using a request parameter in a POST body such as { "action": "type" } and use a switch statement to conditionally perform the desired action. An example of this strategy is discussed in this tutorial.

What would be the correct URI for an endpoint that shares a file with other users?

I've been reading through the best practices when it comes to API definitions and one of the most common recommendations is to make sure that your endpoint definitions do not contain verbs in the path (only nouns should be used for the resources as well as values for path parameters). Instead, the HTTP methods should be used as the "verbs" to perform actions over the resources.
The thing is, suppose I want to create an endpoint that allows a user to share a file with other users. The way I would do it would be as follows:
POST /api/file/{file_id}/share/
With a request body that would look as follows
{ users: [1, 2, ... , N] }
For me this is the most intuitive way to do it since I am performing an operation over the file resource but none of the HTTP methods is enough to describe the operation and I am forced to use the /share/ in order to be able to specify the action to perform.
I am thus violating the best practice of only using nouns in the endpoint path, but I don't see any other possible way to do it except adding query parameters, and those are usually used for filters, sorting, etc.
What would be an adequate way of defining such endpoint?
I think your solution is OK. You can treat /share as a sub-resource of /api/file/{file_id} -- the sharing status of a specific file.
So, to share a (not shared yet) file with other users:
POST /api/file/{file_id}/share
And to change the user list that the file is shared with:
PUT /api/file/{file_id}/share
To me, /share is a good choice -- it's concise and descriptive. However, if you really want to use a noun instead of verb, you can use /sharing or /share-status etc.
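If the backend happens to be .NET, a minimal ASP.NET Core sketch of that sub-resource could look like the following; the controller, route, and type names are just for illustration:

using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/file/{fileId}/share")]
public class FileShareController : ControllerBase
{
    // POST api/file/{fileId}/share -- share a not-yet-shared file with the listed users
    [HttpPost]
    public IActionResult Share(int fileId, [FromBody] ShareRequest request)
    {
        // ... persist the share list for fileId ...
        return NoContent();
    }

    // PUT api/file/{fileId}/share -- replace the list of users the file is shared with
    [HttpPut]
    public IActionResult UpdateShare(int fileId, [FromBody] ShareRequest request)
    {
        // ... overwrite the share list for fileId ...
        return NoContent();
    }
}

public class ShareRequest
{
    public int[] Users { get; set; }
}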

How to register a Property Handler on folders?

I built a virtual filesystem (not a namespace extension) for Windows which acts as a frontend of our document management server consisting of files and folders. In order to be able to display some metadata of the DMS objects in Windows Explorer as additional selectable columns, I successfully provided properties to the Windows Property System by implementing a COM Property Handler. Whereas normal property handlers focus on specific file types for which they feel responsible, my Property Handler adds properties to all files regardless of their type. Because Property Handlers can only be registered at the file-type level, I registered my handler for about 30 types under
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\PropertySystem\PropertyHandlers\<.Extension>
However, I did not manage to register the Property Handler for folder objects. Since all objects in our file system are virtual, I build the property store (IPropertyStore) by implementing IInitializeWithFile instead of IInitializeWithStream. The properties are requested from our DMS with the path from IInitializeWithFile acting as the key and are not read from an object's content. This concept would work for folders as well.
For getting called on folders I tried to associate the handler by registering under different well known identifiers like Folder, Directory, AllFileSystemObjects and * instead of the file extension without success.
I also didn’t find anything in the MSDN documentation regarding this aspect.
Is there a way to register a Windows Property Handler on folders? Or is there some other way to add custom columns to folders in Windows Explorer?
I'm not sure if it is possible to do this.
Property handlers are clearly not the right approach; they are system-wide and there can only be one per file extension. They should only be implemented by the software that "owns" the file extension and can parse the file to extract properties.
The old column handlers would have been your best bet (IMHO) but they are officially dead and you already said you can't use them.
Have you considered creating a namespace extension? Either as a root item somewhere (Desktop or My Computer), the way My Documents used to work in 2000/XP, or maybe something more along the lines of how OneDrive works?
I'm not sure if desktop.ini files work in the root of a drive, but it might be worth looking into. You would then find yourself in the poorly documented land of [.ShellClassInfo] and its CLSID, CLSID2 and UICLSID members. The general idea would be to act as an IShellFolder proxy on top of the "real" IShellFolder so you could create a multiplex property store. I think there are some (undocumented?) property keys you can override to change the folder's default columns and tooltips as well.
There is also something called a delegated folder that allows you to play with nested PIDLs but the documentation is once again pretty useless so I'm not sure if this is something worth looking into.
A 3rd option is to pretend to be a cloud storage provider. I don't know if this gets you any closer to your goal and you would still have to implement some NSE bits to get to the point where you can layer yourself on top of the underlying IShellFolder. This feature is rather new and only documented to work on Windows 10.
The inner workings of how Explorer/IShellBrowser is connected to the IShellFolder/IShellView is one of the least documented parts of Windows. There are hundreds of undocumented interfaces. Explorer gives DefView special treatment leaving other 3rd-party implementations out in the cold.
My feeling is that there is no clean solution to implement this on top of a drive letter, but you might get lucky; if Raymond Chen drops by he might have some tips for you...

Support for multiple environments in your windows store app

I have been working on a Windows Store app where I have to support multiple configuration parameters for my app. One of the parameters is the URL the app is talking to.
For example development environment, test, acceptance and finally production.
One of the things I'm currently thinking about is the most efficient way of supporting all these environments with the least effort. Because there isn't some kind of config file that we can change to update these parameters, I came up with some ideas. I'm curious about other options that I might not have seen.
Here are the things I came up with:
1
Adding multiple build configurations to the app and then using them in code to get the correct parameter, like this:
// DEV and TEST are conditional compilation symbols defined per build configuration
#if DEV
private const string webserviceUrl = "devUrl";
#elif TEST
private const string webserviceUrl = "testUrl";
#else
private const string webserviceUrl = "prodUrl";  // default, e.g. production
#endif
2
With the approach in number 1 there are a few more options available, like including a config XML file based on the configuration, or fetching configuration settings from a webservice the first time the app runs.
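For example, in a Windows Store (WinRT) app, a per-environment config file packaged with the app could be read at startup roughly like this; the file name and location are assumptions:

using System;
using System.Threading.Tasks;
using Windows.Storage;

public static class AppConfig
{
    // Reads a config file included in the app package (ms-appx:///),
    // e.g. one swapped in per build configuration or refreshed on first run.
    public static async Task<string> ReadConfigAsync()
    {
        var uri = new Uri("ms-appx:///Config/environment.json");  // hypothetical packaged file
        StorageFile file = await StorageFile.GetFileFromApplicationUriAsync(uri);
        return await FileIO.ReadTextAsync(file);
    }
}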
3
Using a branch/merge strategy and updating the config files in the branch. The advantage is that the code is clean and only contains the settings it needs for the build it's created for, and the package can be built by the build server. The disadvantage is that you need to branch/merge a lot.
The last option feels like the most 'clean' solution to do this. Am I missing any options, or do you have experience with any of these methods? What would you prefer?
I think the assumption is that apps in the store will always point to production.
But, in saying that, I'm facing the same issue, as we're side-loading the application onto devices that we control and not using the Windows Store at all.
To answer your question, I prefer option 1.
Option 2 with an xml/json config file seems like the best option, though.
The webservice option probably won't work. What webservice URL do you use? And how will it work if you want some instances pointing to different environments, as they will all be fetching the config from the same URL?
Another option you might want to consider would be options in the settings charm menu. For example, use radio buttons for the environments, and allow the user to configure which environment they want to target.
The issue would be locking it down in production for end users so that it isn't modifiable any more. Perhaps once the "PROD" radio button is selected, all the radio buttons are then hidden.
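Persisting whichever environment the user picks is only a couple of lines against local settings; the key name here is just an example:

using Windows.Storage;

public static class EnvironmentSetting
{
    private const string Key = "TargetEnvironment";  // hypothetical settings key

    // Save the environment chosen in the settings pane
    public static void Save(string environment) =>
        ApplicationData.Current.LocalSettings.Values[Key] = environment;

    // Read it back on startup, defaulting to production
    public static string Load() =>
        ApplicationData.Current.LocalSettings.Values[Key] as string ?? "PROD";
}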
If you're deploying the application through side loading, then these settings could probably be configured during the install process.
I'd be interested to hear other opinions as well. This is also an old question, so I'd like to know what solution you decided on implementing.

Gaining Root Access w/ Elevated Helper & SMJobBless

I'm working on something that needs to install files periodically into a folder in /Library.
I understand that in the past I could have used one of the Authenticate methods but those have since been deprecated in 10.7.
What I've understood from my reading so far:
I should create a helper that somehow gets authenticated and have that helper do all of the moving tasks. I've taken a look at some of the sample code, including some involving XPC and one called Elevator, but I'm a bit confused.
A lot of it seems to deal with setting up some sort of client/server model, but I'm not sure how this would translate into me actually installing my files into the correct directories. Most of the examples just pass strings around.
My question, simply: how can I create my folder in /Library programmatically and periodically write files to it while only prompting the user for a password ONCE and never again? I'm really not sure how to approach this, and there doesn't seem to be much documentation.
You are correct that there isn't much documentation for this. You'll basically write another app, the helper app, which will get installed with SMJobBless(). Not surprisingly, the tricky part here is the code signing. The least obvious part for me was that the SMAuthorizedClients and SMPrivilegedExecutables entries in the info plist files of each app are dependent on the identity/certificate that you used to sign the app with. There is also a trick with the compiler/linker to get the info plist file compiled into the helper tool, which will be a single executable file rather than a bundle.
Once you get the helper app up and running then you have to devise a way to communicate with it since these are two different processes. XPC is one option, perhaps the easiest. XPC is typically used with server processes, but what you are using here is the communication side of XPC only. Basically it passes dictionaries back and forth between the two apps. Create a standard format for the dictionary. I used @"action", @"source", and @"destination" with 3 different action values, @"filemove", @"filecopy", and @"makedirectory". Those are the 3 things that my helper app can do and I can easily add more if necessary.
The helper app will basically setup the XPC connection and event handler stuff and wait for a connection and commands. The commands will just be a dictionary so you check for the appropriate keys/values and do whatever.
I can provide more details and code if you need more help, but this question is 9 months old so I don't want to waste time giving you details you've already figured out.