I'm building a C# application which uses plug-ins. The application must guarantee to the user that plug-ins cannot do whatever they want on the user's machine, and that they have fewer privileges than the application itself (for example, the application can access its own log files, whereas plug-ins cannot).
I considered three alternatives.
Using System.AddIn. I tried this alternative first, because it seemed much more powerful, but I'm really disappointed by the need to modify the same code seven times in seven different projects every time I want to change something. Besides, there are a huge number of problems to solve even for a simple Hello World application.
Using System.Activator.CreateInstance(assemblyName, typeName). This is what I used in the previous version of the application. I can't use it anymore, because it does not provide a way to restrict permissions.
Using System.Activator.CreateInstance(AppDomain domain, [...]). That's what I'm trying to implement now, but it seems the only way to do so is to go through ObjectHandle, which requires serialization of every class involved. However, the plug-ins contain WPF UserControls, which are not serializable.
So is there a way to create plug-ins containing UserControls or other non-serializable objects and to execute those plug-ins with a custom PermissionSet?
One thing you could do is set the current AppDomain's policy level to a restricted permission set and add evidence markers to restrict based on strong name or location. The easiest would probably be to require that plug-ins live in a specific directory and give them a restrictive policy.
e.g.
public static void SetRestrictedLevel(Uri path)
{
    PolicyLevel appDomainLevel = PolicyLevel.CreateAppDomainLevel();

    // Create a simple root policy, normally with FullTrust
    PolicyStatement fullPolicy = new PolicyStatement(appDomainLevel.GetNamedPermissionSet("FullTrust"));
    UnionCodeGroup policyRoot = new UnionCodeGroup(new AllMembershipCondition(), fullPolicy);

    // Build the restricted permission set
    PermissionSet permSet = new PermissionSet(PermissionState.None);
    permSet.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution));
    PolicyStatement permissions = new PolicyStatement(permSet, PolicyStatementAttribute.Exclusive);
    policyRoot.AddChild(new UnionCodeGroup(new UrlMembershipCondition(path.ToString()), permissions));

    appDomainLevel.RootCodeGroup = policyRoot;
    AppDomain.CurrentDomain.SetAppDomainPolicy(appDomainLevel);
}

static void RunPlugin()
{
    try
    {
        SetRestrictedLevel(new Uri("file:///c:/plugins/*"));
        Assembly a = Assembly.LoadFrom("file:///c:/plugins/ClassLibrary.dll");
        Type t = a.GetType("ClassLibrary.TestClass");
        /* Will throw an exception */
        t.InvokeMember("DoSomething", BindingFlags.InvokeMethod | BindingFlags.Public | BindingFlags.Static,
            null, null, null);
    }
    catch (Exception e)
    {
        Console.WriteLine(e.ToString());
    }
}
Of course, this isn't rigorously tested, and CAS policy is notoriously complex, so there is always a risk that this code lets something bypass the policy. YMMV :)
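On the serialization point in the question: a type that derives from MarshalByRefObject crosses the AppDomain boundary by reference instead of by value, so the plug-in classes themselves never need to be serializable. Below is a minimal sketch of that approach (a separate AppDomain created with an explicit grant set plus a MarshalByRefObject host), loosely following the MSDN "partially trusted code in a sandbox" pattern; the PluginHost/Sandbox names and the plug-in contract are hypothetical, and in practice you may also need to sign the host assembly and pass its StrongName to CreateDomain as a full-trust assembly. Displaying a UserControl created inside the sandbox in the host's UI is a separate problem (that is what System.AddIn's contract adapters address).
using System;
using System.Security;
using System.Security.Permissions;

// Instantiated inside the sandboxed AppDomain; the host only ever talks to it through
// a transparent proxy, so nothing has to be serializable.
public class PluginHost : MarshalByRefObject
{
    public void LoadAndRun(string pluginAssemblyName, string typeName)
    {
        // Runs under the sandbox's restricted grant set; the plug-in assembly is
        // resolved from the sandbox's ApplicationBase (the plug-in directory).
        object plugin = Activator.CreateInstance(pluginAssemblyName, typeName).Unwrap();
        // ... cast 'plugin' to your plug-in interface and call into it here ...
    }
}

public static class Sandbox
{
    public static PluginHost Create(string pluginDir)
    {
        // Execution-only grant set; add permissions as your plug-in contract requires.
        var grantSet = new PermissionSet(PermissionState.None);
        grantSet.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution));

        var setup = new AppDomainSetup { ApplicationBase = pluginDir };
        AppDomain sandbox = AppDomain.CreateDomain("PluginSandbox", null, setup, grantSet);

        // Load the host type into the sandbox from its own location (outside ApplicationBase).
        var handle = Activator.CreateInstanceFrom(sandbox,
            typeof(PluginHost).Assembly.ManifestModule.FullyQualifiedName,
            typeof(PluginHost).FullName);
        return (PluginHost)handle.Unwrap();
    }
}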
We have implemented a custom construction heuristic with OptaPlanner 7. We didn't use a simple CustomPhaseCommand; instead, we extended MoveSelectorConfig and overrode buildBaseMoveSelector to return our own MoveFactory wrapped in a MoveIteratorFactoryToMoveSelectorBridge. We decided to do so because it gives us the following advantages:
global termination config is supported out of the box
type-safe configuration from code (no raw Strings)
With OptaPlanner 8, the method buildBaseMoveSelector is gone from the MoveSelectorConfig API, and building a custom config class seems to be prevented by the new implementation of MoveSelectorFactory.
Is it still possible to inject a proper custom construction heuristic into the OptaPlanner 8 configuration, and if so, how? Or should we be using a CustomPhaseCommand with a custom, self-implemented termination?
EDIT:
For clarity, in OptaPlanner 7 we had the following snippet in our OptaPlanner config (defined in Kotlin code):
ConstructionHeuristicPhaseConfig().apply {
    foragerConfig = ConstructionHeuristicForagerConfig().apply {
        pickEarlyType = FIRST_FEASIBLE_SCORE
    }
    entityPlacerConfig = QueuedEntityPlacerConfig().apply {
        moveSelectorConfigList = listOf(
            CustomMoveSelectorConfig().apply {
                someProperty = 1
                otherProperty = 0
            }
        )
    }
},
CustomMoveSelectorConfig extends MoveSelectorConfig and overrides buildBaseMoveSelector:
class CustomMoveSelectorConfig(
    var someProperty: Int = 0,
    var otherProperty: Int = 0,
) : MoveSelectorConfig<CustomMoveSelectorConfig>() {

    override fun buildBaseMoveSelector(
        configPolicy: HeuristicConfigPolicy?,
        minimumCacheType: SelectionCacheType?,
        randomSelection: Boolean,
    ): MoveSelector {
        return MoveIteratorFactoryToMoveSelectorBridge(
            CustomMoveFactory(someProperty, otherProperty),
            randomSelection
        )
    }
}
To summarize: we really need to plug in our own MoveSelector with the custom factory. I think this is not possible with OptaPlanner 8 at the moment.
Interesting extension.
Motivation for the changes in 8:
buildBaseMoveSelector was not public API (the config package was not in the api package; in 7 we only guaranteed XML backwards compatibility for the config package). Now we also guarantee API backwards compatibility for the config package, so including programmatic configuration, because we moved all build* methods out of it.
In 8.2 or later we want to internalize the configuration in the SolverFactory, so we can build thousands of Solver instances faster. For example, loading classes would then be done only once, at SolverFactory build time, and no longer at every Solver build.
Anyway, let's first see if you can use the default way to override the moves of the CH, by explicitly configuring the queuedEntityPlacer and its MoveIteratorFactory: https://docs.optaplanner.org/latestFinal/optaplanner-docs/html_single/#allocateEntityFromQueueConfiguration
I guess not, because you'd need mimic support... for the selected entity you need to generate n moves, but always for the same entity during one placement (hence the need for mimicking).
Clearly, the changes in 8 prevent users from plugging in their own MoveSelectors (which is an internal API, but anyway). We might be able to add an internal API to allow that again.
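Until such an API exists, the CustomPhaseCommand fallback mentioned in the question still works in OptaPlanner 8. A minimal sketch in Java follows; MySolution, the variable names, and the assignment logic are placeholders, and the exact config setter names should be verified against the 8.x javadoc.
import java.util.Collections;

import org.optaplanner.core.api.score.director.ScoreDirector;
import org.optaplanner.core.config.phase.custom.CustomPhaseConfig;
import org.optaplanner.core.impl.phase.custom.CustomPhaseCommand;

// Builds the initial solution imperatively instead of through a custom MoveSelector.
public class MyConstructionHeuristicCommand implements CustomPhaseCommand<MySolution> {

    @Override
    public void changeWorkingSolution(ScoreDirector<MySolution> scoreDirector) {
        MySolution solution = scoreDirector.getWorkingSolution();
        // For each uninitialized entity of 'solution': pick a value, then notify the
        // score director, e.g.
        //   scoreDirector.beforeVariableChanged(entity, "myVariable");
        //   entity.setMyVariable(chosenValue);
        //   scoreDirector.afterVariableChanged(entity, "myVariable");
        scoreDirector.triggerVariableListeners();
    }

    // Phase config that replaces the ConstructionHeuristicPhaseConfig in the solver config.
    public static CustomPhaseConfig asPhaseConfig() {
        CustomPhaseConfig config = new CustomPhaseConfig();
        config.setCustomPhaseCommandClassList(
                Collections.singletonList(MyConstructionHeuristicCommand.class));
        return config;
    }
}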
I am trying to create an AWS instance using jclouds 1.9.0 and then run a script on it (via SSH). I am following the example located here, but I am getting authentication failed errors when the client (Java program) tries to connect to the instance. The AWS console shows that the instance is up and running.
The example tries to create a LoginCredentials object
String user = System.getProperty("user.name");
String privateKey = Files.toString(new File(System.getProperty("user.home") + "/.ssh/id_rsa"), UTF_8);
return LoginCredentials.builder().user(user).privateKey(privateKey).build();
which is later used by the SSH client
responses = compute.runScriptOnNodesMatching(
        inGroup(groupName),                // predicate used to select nodes
        exec(command),                     // what you actually intend to run
        overrideLoginCredentials(login)    // use my local user & ssh key
                .runAsRoot(false)          // don't attempt to run as root (sudo)
                .wrapInInitScript(false));
Some login information is injected into the instance with the following commands
Statement bootInstructions = AdminAccess.standard();
templateBuilder.options(runScript(bootInstructions));
Since I am on a Windows machine, the creation of LoginCredentials 'fails', and thus I altered its code to
String user = "ec2-user";
String privateKey = "-----BEGIN RSA PRIVATE KEY-----.....-----END RSA PRIVATE KEY-----";
return LoginCredentials.builder().user(user).privateKey(privateKey).build();
I also tried to define the credentials while building the template, as described in the "EC2: In Depth" guide, but with no luck.
An alternative is to build the instance and inject the key pair as follows, but this implies that I need to have the SSH key stored in my AWS console, which is currently not the case, and it also breaks the functionality of running a script (via SSH), since I cannot infer the NodeMetadata from a RunningInstance object.
RunInstancesOptions options = RunInstancesOptions.Builder.asType("t2.micro").withKeyName(keypair).withSecurityGroup(securityGroup).withUserData(script.getBytes());
Any suggestions??
Note: While I am currently testing this on AWS, I want to keep the code as decoupled from the provider as possible.
Update 26/10/2015
Based on @Ignasi Barrera's answer, I changed my implementation by adding .init(new MyAdminAccessConfiguration()) while creating the bootInstructions:
Statement bootInstructions = AdminAccess.standard().init(new MyAdminAccessConfiguration());
templateBuilder.options(runScript(bootInstructions));
where MyAdminAccessConfiguration is my own implementation of the AdminAccessConfiguration interface, as @Ignasi Barrera described it.
I think the issue lies in the fact that the jclouds code runs on a Windows machine and jclouds makes some Unix assumptions by default.
There are two different things here: first, AdminAccess.standard() is used to configure a user in the deployed node once it boots; later, the LoginCredentials object passed to the run script method is used to authenticate as the user that was created by the previous statement.
The issue here is that AdminAccess.standard() reads the "current user" information and assumes a Unix system. That user information is provided by this Default class, and in your case I'm pretty sure it will fall back to the catch block and return an auto-generated SSH key pair. That means AdminAccess.standard() is creating a user in the node with an auto-generated (random) SSH key, but the LoginCredentials you are building don't match those keys, hence the authentication failure.
Since the AdminAccess entity is immutable, the better and cleaner approach to fix this is to create your own implementation of the AdminAccessConfiguration interface. You can just copy the entire Default class and change the Unix-specific bits to accommodate the SSH setup on your Windows machine. Once you have the implementation class, you can inject it by creating a Guice module and passing it to the list of modules provided when creating the jclouds context. Something like:
// Create the custom module to inject your implementation
Module windowsAdminAccess = new AbstractModule() {
    @Override protected void configure() {
        bind(AdminAccessConfiguration.class).to(YourCustomWindowsImpl.class).in(Scopes.SINGLETON);
    }
};

// Provide the module in the module list when creating the context
ComputeServiceContext context = ContextBuilder.newBuilder("aws-ec2")
        .credentials("api-key", "api-secret")
        .modules(ImmutableSet.<Module> of(windowsAdminAccess, new SshjSshClientModule()))
        .buildView(ComputeServiceContext.class);
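A related option, not part of the answer above and worth double-checking against the jclouds 1.9 javadoc: instead of letting AdminAccess detect the local user, you can pin the admin user and keys explicitly with AdminAccess.builder(), so the key installed on the node is guaranteed to match the LoginCredentials used later. The user name and file paths below are just examples:
// Explicitly pin the user and SSH keys that AdminAccess installs on the node,
// so they match the LoginCredentials used by runScriptOnNodesMatching later.
String publicKey = Files.toString(new File("C:/keys/jclouds.pub"), UTF_8);
String privateKey = Files.toString(new File("C:/keys/jclouds.pem"), UTF_8);

Statement bootInstructions = AdminAccess.builder()
        .adminUsername("ec2-user")
        .adminPublicKey(publicKey)
        .adminPrivateKey(privateKey)
        .build();
templateBuilder.options(runScript(bootInstructions));

LoginCredentials login = LoginCredentials.builder()
        .user("ec2-user")
        .privateKey(privateKey)
        .build();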
I noticed this in the debug environment, where I have to do many re-installs in order to test persistent data storage, initial settings, etc. It may not be relevant in production, but I mention it anyway just to inform other developers.
Any files created by an app in its App Folder are not 'visible' to queries after manual un-install / re-install (from IDE, for instance). The same applies to the 'Encoded DriveID' - it is no longer valid.
It is probably 'by design', but it effectively creates 'orphans' in the app folder until they are manually cleaned up via 'drive.google.com > Manage Apps > [yourapp] > Options > Delete hidden app data'. It also creates problems if an app relies on finding files by metadata, title, ..., since these seem to be gone. As I said, not a production problem, but it can create some frustration during development.
Can any of the friendly Googlers confirm this? Is there any other way to get to these files after a re-install?
Try this approach:
Use requestSync() in onConnected() as:
@Override
public void onConnected(Bundle connectionHint) {
    super.onConnected(connectionHint);
    Drive.DriveApi.requestSync(getGoogleApiClient()).setResultCallback(syncCallback);
}
Then, in its callback, query the contents of the drive using:
final private ResultCallback<Status> syncCallback = new ResultCallback<Status>() {
    @Override
    public void onResult(@NonNull Status status) {
        if (!status.isSuccess()) {
            showMessage("Problem while retrieving results");
            return;
        }
        query = new Query.Builder()
                .addFilter(Filters.and(Filters.eq(SearchableField.TITLE, "title"),
                        Filters.eq(SearchableField.TRASHED, false)))
                .build();
        Drive.DriveApi.query(getGoogleApiClient(), query)
                .setResultCallback(metadataCallback);
    }
};
Then, in its callback, if found, retrieve the file using:
final private ResultCallback<DriveApi.MetadataBufferResult> metadataCallback =
        new ResultCallback<DriveApi.MetadataBufferResult>() {
            @SuppressLint("SetTextI18n")
            @Override
            public void onResult(@NonNull DriveApi.MetadataBufferResult result) {
                if (!result.getStatus().isSuccess()) {
                    showMessage("Problem while retrieving results");
                    return;
                }
                MetadataBuffer mdb = result.getMetadataBuffer();
                // Read each file found by the query; the DriveId is only in scope inside the loop.
                for (Metadata md : mdb) {
                    Date createdDate = md.getCreatedDate();
                    DriveId driveId = md.getDriveId();
                    readFromDrive(driveId);
                }
            }
        };
Job done!
Hope that helps!
It looks like Google Play services has a problem. (https://stackoverflow.com/a/26541831/2228408)
For testing, you can do it by clearing Google Play services data (Settings > Apps > Google Play services > Manage Space > Clear all data).
Or, at this time, you need to implement it by using Drive SDK v2.
I think you are correct that it is by design.
By inspection I have concluded that until an app places data in the AppFolder, Drive does not sync down to the device, however much you try to hassle it. Therefore it is impossible to check for the existence of AppFolder content placed by another device or a prior installation. I'd assume that this was to try to create a consistent clean install.
I can see that there are a couple of strategies to work around this:
1) Place dummy data on AppFolder and then sync and recheck.
2) Accept that in the first instance there is the possibility of duplicates; as you cannot access the existing file, by definition you will create a new copy, and use custom metadata to come up with a scheme to differentiate like-named files and choose which one you want to keep (essentially, implement your conflict-merge strategy across the two different files).
I've done the second: I have an update number to compare data from different devices and decide which version I want, and hence whether to upload, download, or leave alone. As my data is an SQLite DB, I also have some code to only sync once updates have settled down. I deliberately consider people updating two devices at once foolish, and the results are consistent but undefined as to which device will win.
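A rough sketch of that custom-metadata scheme with the Google Drive Android API is below; the "updateNumber" property name and the helper class are made up for illustration, so treat it as an outline rather than working code.
import com.google.android.gms.drive.Metadata;
import com.google.android.gms.drive.MetadataChangeSet;
import com.google.android.gms.drive.metadata.CustomPropertyKey;

// Helper for the "update number" conflict scheme described above (names are illustrative).
public final class UpdateStamp {

    private static final CustomPropertyKey UPDATE_KEY =
            new CustomPropertyKey("updateNumber", CustomPropertyKey.PUBLIC);

    // Stamp a file with the local update counter when writing it.
    public static MetadataChangeSet stamp(long localUpdateNumber) {
        return new MetadataChangeSet.Builder()
                .setCustomProperty(UPDATE_KEY, Long.toString(localUpdateNumber))
                .build();
    }

    // When like-named duplicates are found, keep the one with the higher counter.
    public static long updateNumberOf(Metadata md) {
        String value = md.getCustomProperties().get(UPDATE_KEY);
        return value == null ? -1L : Long.parseLong(value);
    }
}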
When creating a COM object through ATL, the default .rgs file always registers the object under HKCR:
HKCR
{
...
}
If this object is being written to the registry for the first time, writing to HKCR is equivalent to writing to HKLM\Software\Classes, which makes the object globally visible to all users.
This seems like a rather poor idea from the standpoint of registry maintenance, since HKCR/HKLM will quickly fill up with COM objects installed by any user on the machine, slowing down access for all users. In addition, any deployment package that includes COM objects will have to request admin elevation on Windows with UAC, which adversely affects deployability. So why is ATL/COM designed this way?
This article suggests that in order to register a COM object under HKCU, DllRegisterServer must be updated (http://blogs.msdn.com/b/jaredpar/archive/2005/05/29/423000.aspx). Why wouldn't it work if we simply changed HKCR to HKCU in the .rgs file? Would ATL simply ignore it? If that's the case, why does it insist on using HKCR?
Registration through DllRegisterServer is expected, by design, to provide system-wide registration and COM class availability.
To supersede this registration behavior and register per user, another registration method was introduced at some point: the DllInstall function. Current ATL implements this out of the box, offering per-user registration when the "user" command-line switch is passed to DllInstall.
Below is the code you get in a new project generated from the template (using the wizard):
// DllInstall - Adds/Removes entries to the system registry per user per machine.
STDAPI DllInstall(BOOL bInstall, _In_opt_ LPCWSTR pszCmdLine)
{
    HRESULT hr = E_FAIL;
    static const wchar_t szUserSwitch[] = L"user";

    if (pszCmdLine != NULL)
    {
        if (_wcsnicmp(pszCmdLine, szUserSwitch, _countof(szUserSwitch)) == 0)
        {
            ATL::AtlSetPerUserRegistration(true);
        }
    }

    if (bInstall)
    {
        hr = DllRegisterServer();
        if (FAILED(hr))
        {
            DllUnregisterServer();
        }
    }
    else
    {
        hr = DllUnregisterServer();
    }

    return hr;
}
To register per user, you can use "regsvr32 /i:user MyAtlProject.dll". You are free to choose the registration you want; there are no "poor" and "good" methods - you just have options to choose from.
It is now possible to call AtlSetPerUserRegistration to specify whether registration is done to HKEY_LOCAL_MACHINE or HKEY_CURRENT_USER, without having to modify the .rgs files.
Consider a WF4 project running in IIS, with a single workflow definition (xamlx) and a SqlInstanceStore for persistence.
Instead of hosting the xamlx directly, we host a WorkflowServiceHostFactory which spins up a dedicated WorkflowServiceHost on a separate endpoint for every customer.
This has been running fine for a while, until we required a new version of the workflow definition, so now on top of Flow.xamlx I have Flow1.xamlx.
Since all interactions with the workflow service are wrapped with business logic which is smart enough to identify the required version, this homebrew versioning works fine for newly started workflows (both on Flow.xamlx and Flow1.xamlx).
However, workflows started before this change fail to be reactivated (on a post, the service host throws an UnknownMessageReceived exception).
Since WF isn't overly verbose in telling you WHY it can't reactivate the workflow (wrong version, instance not found, lock, etc), we attached a SQL profiler to the database.
It turns out the 'WorkflowServiceType' the WorkflowServiceHost uses in its queries is different from the stored instances' WorkflowServiceType. Likely this is why it fails to detect the persisted instance.
Since I'm pretty sure I instantiate the same xamlx, I can't understand where this value comes from. What parameters go into the calculation of this GUID, does the environment (site name) matter, and what can I do to reactivate the workflow?
In the end I decompiled System.Activities.DurableInstancing. The only setter for WorkflowHostType on SqlWorkflowInstanceStore was in ExtractWorkflowHostType:
private void ExtractWorkflowHostType(IDictionary<XName, InstanceValue> commandMetadata)
{
    InstanceValue instanceValue;
    if (commandMetadata.TryGetValue(WorkflowNamespace.WorkflowHostType, out instanceValue))
    {
        XName xName = instanceValue.Value as XName;
        if (xName == null)
        {
            throw FxTrace.Exception.AsError(new InstancePersistenceCommandException(SR.InvalidMetadataValue(WorkflowNamespace.WorkflowHostType, typeof(XName).Name)));
        }
        byte[] bytes = Encoding.Unicode.GetBytes(xName.ToString());
        base.Store.WorkflowHostType = new Guid(HashHelper.ComputeHash(bytes));
        this.fireRunnableInstancesEvent = true;
    }
}
I couldn't clearly disentangle the calling code path, so I had to find out at runtime by attaching WinDbg/SOS to IIS and breaking on HashHelper.ComputeHash.
I was able to retrieve the XName that goes into the hash calculation: it has a local name equal to the service file name and a namespace equal to [sitename]/[path]/.
In the end the WorkflowHostType calculation comes down to:
var xName = XName.Get("Flow.xamlx.svc", "/examplesite/WorkflowService/1/");
var bytes = Encoding.Unicode.GetBytes(xName.ToString());
var WorkflowHostType = new Guid(HashHelper.ComputeHash(bytes));
Bottom line: apparently, workflows can only be rehydrated when the service file name, site name, and path are all identical (case-sensitive) to what they were when the workflow was started.