PowerShell Remoting Serialization and Deserialization

Are the routines for serializing and deserializing objects from PowerShell (as performed by PowerShell Remoting) available?
I'd like to avoid having to write the objects to disk with Export-CliXml and then read them back in with Import-CliXml.
Basically, I want to get the property bags that the deserialization creates so that I can add them to an AppFabric object cache. Otherwise, AppFabric tries to use .NET serialization, which fails for a number of the standard object types.
Perhaps through the $host or $executioncontext variables?

They have published the PowerShell Remoting Protocol specification, which documents the serialization format, but the source code used to implement it is not public at this time. http://msdn.microsoft.com/en-us/library/dd357801(PROT.10).aspx

Oh, I see what you're asking: you're looking for a ConvertTo-CliXml that relates to Export-CliXml the way ConvertTo-Csv relates to Export-Csv. At first glance it sounded like you were trying to avoid CliXml entirely.
In that case, there's one on PoshCode: ConvertTo-CliXml and ConvertFrom-CliXml.
Here's a copy to give you an idea (cleaned up slightly; I haven't verified it end to end):
function ConvertTo-CliXml {
    param(
        [Parameter(Position=0, Mandatory=$true, ValueFromPipeline=$true)]
        [ValidateNotNullOrEmpty()]
        [PSObject[]]$InputObject
    )
    begin {
        # Grab the non-public System.Management.Automation.Serializer type via reflection
        $type = [PSObject].Assembly.GetType('System.Management.Automation.Serializer')
        $ctor = $type.GetConstructor('instance,nonpublic', $null, @([System.Xml.XmlWriter]), $null)
        $sw = New-Object System.IO.StringWriter
        $xw = New-Object System.Xml.XmlTextWriter $sw
        $serializer = $ctor.Invoke($xw)
        $method = $type.GetMethod('Serialize', 'nonpublic,instance', $null, [type[]]@([object]), $null)
        $done = $type.GetMethod('Done', [System.Reflection.BindingFlags]'nonpublic,instance')
    }
    process {
        try {
            [void]$method.Invoke($serializer, $InputObject)
        } catch {
            Write-Warning "Could not serialize $($InputObject.GetType()): $_"
        }
    }
    end {
        # Finish the document, then emit the accumulated CliXml text
        [void]$done.Invoke($serializer, @())
        $xw.Close()
        $sw.ToString()
        $sw.Dispose()
    }
}
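If you're on PowerShell 3.0 or later, I believe you can skip the reflection entirely: the engine exposes the same machinery through the public [System.Management.Automation.PSSerializer] class. A minimal sketch of the in-memory round trip (the second argument to Serialize is an optional depth):
# Serialize to a CliXml string without touching disk
$clixml = [System.Management.Automation.PSSerializer]::Serialize((Get-Process -Id $PID), 2)
# Rehydrate the same kind of property bag that remoting / Import-CliXml produces
$bag = [System.Management.Automation.PSSerializer]::Deserialize($clixml)
# Or, using the function above:
$clixml = Get-Process -Id $PID | ConvertTo-CliXml
For the AppFabric scenario, one option would be to cache the CliXml string itself and call Deserialize on the way out, so the cache only ever has to handle a string.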

Related

Redis StackExchange LuaScripts with parameters

I'm trying to use the following Lua script with the C# StackExchange.Redis library:
private const string LuaScriptToExecute = @"
    local current
    current = redis.call(""incr"", KEYS[1])
    if current == 1 then
        redis.call(""expire"", KEYS[1], KEYS[2])
        return 1
    else
        return current
    end";
Whenever I'm evaluating the script "as a string", it works properly:
var incrementValue = await Database.ScriptEvaluateAsync(LuaScriptToExecute,
    new RedisKey[] { key, ttlInSeconds });
If I understand correctly, each time I invoke the ScriptEvaluateAsync method the script is transmitted to the Redis server, which is not very efficient.
To overcome this, I tried using the "prepared script" approach, by running:
_setCounterWithExpiryScript = LuaScript.Prepare(LuaScriptToExecute);
...
...
var incrementValue = await Database.ScriptEvaluateAsync(_setCounterWithExpiryScript,
    new[] { key, ttlInSeconds });
Whenever I try to use this approach, I receive the following error:
ERR Error running script (call to f_7c891a96328dfc3aca83aa6fb9340674b54c4442): @user_script:3: @user_script: 3: Lua redis() command arguments must be strings or integers
What am I doing wrong?
What is the right approach in using "prepared" LuaScripts that receive dynamic parameters?
If I look in the documentation: no idea. If I look at the unit tests on GitHub, it looks really easy.
(By the way, is your ttlInSeconds really a RedisKey and not a RedisValue? You are accessing it through KEYS[2] - shouldn't that be ARGV[1]? Anyway...)
It looks like you should rewrite your script to use named parameters instead of positional KEYS/ARGV arguments:
private const string LuaScriptToExecute = @"
    local current
    current = redis.call(""incr"", @myKey)
    if current == 1 then
        redis.call(""expire"", @myKey, @ttl)
        return 1
    else
        return current
    end";

// We should load the script onto the whole Redis cluster, even when we don't have one.
// In that case there will be only one EndPoint, one iteration, etc.
var lua = LuaScript.Prepare(LuaScriptToExecute);
_myScripts = _redisMultiplexer.GetEndPoints()
    .Select(endpoint => _redisMultiplexer.GetServer(endpoint))
    .Where(server => server != null)
    .Select(server => lua.Load(server))
    .ToArray();
Then just execute it with an anonymous class as the parameter:
foreach (var setCounterWithExpiryScript in _myScripts)
{
    var incrementValue = await Database.ScriptEvaluateAsync(
        setCounterWithExpiryScript,
        new {
            myKey = (RedisKey)key, // or new RedisKey(key) or idk
            ttl = (RedisKey)ttlInSeconds
        }
    ); // maybe with .ConfigureAwait(false)? ;-)
    // when ttlInSeconds is a value and not a key, just don't cast it to RedisKey
    /*
    var incrementValue = await Database.ScriptEvaluateAsync(
        setCounterWithExpiryScript,
        new {
            myKey = (RedisKey)key,
            ttl = ttlInSeconds
        }
    ).ConfigureAwait(false);
    */
}
Warning:
Please note that Redis blocks all other commands while a script is executing. Your script is very simple (it only saves one round trip to Redis, and only when current != 1), so I have a feeling it will be counterproductive at greater-than-trivial scale. Just do one or two calls from C# and don't bother with the script.
First of all, Jan's comment above is correct.
The script line that updated the key's TTL should be redis.call(""expire"", KEYS[1], ARGV[1]).
Regarding the issue itself, after searching for similar issues in the StackExchange.Redis GitHub repository, I found that Lua scripts do not work very well in cluster mode.
Fortunately, it seems that "loading the scripts" isn't really necessary.
The ScriptEvaluateAsync method works properly in cluster mode and is sufficient (caching-wise).
More details can be found in the following GitHub issue.
So in the end, using ScriptEvaluateAsync without "preparing the script" did the job.
As a side note regarding Jan's comment above that the script isn't needed and can be replaced with two C# calls: the script is actually quite important, because the operation has to be atomic (it's a "rate limiter" pattern).

How to dispose PSObject

As shown in the code below, I end up creating a new PSObject for each site in the sites collection. Eventually I need to run this for multiple SharePoint web applications, each containing several sites, with the total number of sites crossing 2000. So that is a lot of PSObjects being created.
PSObject does not have a Dispose method, so what are my options to make the code more efficient and release the objects?
Also, each time I execute $SiteInfoCollection += $object there is a memory reallocation, because .NET arrays are fixed-size and += builds a new array and copies the old one. According to an article I read several months ago, memory reallocation is expensive and bad for performance. Is there a way to make this step efficient as well?
$SPWebApp = Get-SPWebApplication "http://portal.contoso.com"
$SiteInfoCollection = @()
foreach ($site in $SPWebApp.Sites)
{
    $object = New-Object PSObject
    Add-Member -InputObject $object -MemberType NoteProperty -Name URL -Value ""
    Add-Member -InputObject $object -MemberType NoteProperty -Name Title -Value ""
    Add-Member -InputObject $object -MemberType NoteProperty -Name Template -Value ""
    $object.URL = $site.Url
    $object.Title = $site.RootWeb.Title
    $object.Template = $site.RootWeb.WebTemplate
    $SiteInfoCollection += $object
    #$object.Dispose() # NOT a valid operation
}
$site.Dispose()
$SiteInfoCollection
This object will be garbage collected when it's time to do so. It's not a heavy SharePoint object that holds COM and database connections, and therefore doesn't need to be disposed.
You do want to dispose $site, though, so put it inside the foreach loop.
For most SharePoint PowerShell commands you can use -AssignmentCollection instead of manually disposing:
Manages objects for the purpose of proper disposal. Use of objects, such as SPWeb or SPSite, can use large amounts of memory and use of these objects in Windows PowerShell scripts requires proper memory management. Using the SPAssignment object, you can assign objects to a variable and dispose of the objects after they are needed to free up memory. When SPWeb, SPSite, or SPSiteAdministration objects are used, the objects are automatically disposed of if an assignment collection or the Global parameter is not used.
A more PowerShell-like way to get the same functionality would be:
$gc = Start-SPAssignment
Get-SPSite "http://portal.contoso.com" -Limit All -AssignmentCollection $gc | % {
    # Loop
}
Stop-SPAssignment -Identity $gc
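As for the += concern in the question: every += on a plain array allocates a new array and copies the old contents. Two common alternatives, sketched against the question's variables (not tested against SharePoint):
# Option 1: let the foreach statement collect the output - no manual appends at all
$SiteInfoCollection = foreach ($site in $SPWebApp.Sites) {
    New-Object PSObject -Property @{
        URL      = $site.Url
        Title    = $site.RootWeb.Title
        Template = $site.RootWeb.WebTemplate
    }
    $site.Dispose()   # dispose inside the loop, as suggested above
}
# Option 2: a generic List grows amortized instead of copying the whole array on every add
$SiteInfoCollection = New-Object 'System.Collections.Generic.List[object]'
$SiteInfoCollection.Add($object)
Either way you avoid rebuilding the entire array on each append.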

Defining variable expansion for .NET objects in PowerShell

Quick background: I am fairly new to PowerShell but am a well-versed C# dev. I have mixed feelings about PowerShell so far. I love it one day, hate it the next. Just when I think I've got it figured out, I get stumped for hours trial-and-error-ing some feature I think it should implement but doesn't.
I would like PowerShell to let me override an object's ToString() method (which it does) such that when an object is referenced inside a double-quoted string, it will call my custom ToString method (which it does not).
Example:
PS C:\> [System.Text.Encoding] | Update-TypeData -Force -MemberType ScriptMethod -MemberName ToString -Value { $this.WebName }
PS C:\> $enc = [System.Text.Encoding]::ASCII
PS C:\> $enc.ToString() ### This works as expected
us-ascii
PS C:\> "$enc" ### This calls the object's original ToString() method.
System.Text.ASCIIEncoding
How does one define the variable expansion behavior of a .NET object?
The language specification doesn't say anything about overriding variable substitution, so I don't think there's a defined way of doing exactly what you want. If a single class was involved you could subclass it in C#, but for a class hierarchy the nearest I can think of is to produce a wrapper around the object which does have your desired behaviour.
$source = @"
using System.Text;
public class MyEncoding
{
    public System.Text.Encoding encoding;
    public MyEncoding()
    {
        this.encoding = System.Text.Encoding.ASCII;
    }
    public MyEncoding(System.Text.Encoding enc)
    {
        this.encoding = enc;
    }
    public override string ToString() { return this.encoding.WebName; }
}
"@
Add-Type -TypeDefinition $source
Then you can use it like this:
PS C:\scripts> $enc = [MyEncoding][System.Text.Encoding]::ASCII;
PS C:\scripts> "$enc"
us-ascii
PS C:\scripts> $enc = [MyEncoding][System.Text.Encoding]::UTF8;
PS C:\scripts> "$enc"
utf-8
PS C:\scripts> $enc.encoding
BodyName : utf-8
EncodingName : Unicode (UTF-8)
HeaderName : utf-8
WebName : utf-8
WindowsCodePage : 1200
IsBrowserDisplay : True
IsBrowserSave : True
IsMailNewsDisplay : True
IsMailNewsSave : True
IsSingleByte : False
EncoderFallback : System.Text.EncoderReplacementFallback
DecoderFallback : System.Text.DecoderReplacementFallback
IsReadOnly : True
CodePage : 65001
If you don't like the extra .encoding to get at the underlying object you could just add the desired properties to the wrapper.
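If you do want those properties on the wrapper, one way to add them without touching the C# again (this is my own suggestion, reusing the Update-TypeData approach from the question) is a ScriptProperty that forwards to the wrapped object:
PS C:\scripts> Update-TypeData -TypeName MyEncoding -MemberType ScriptProperty -MemberName WebName -Value { $this.encoding.WebName }
PS C:\scripts> $enc = [MyEncoding][System.Text.Encoding]::UTF8
PS C:\scripts> $enc.WebName   # no .encoding hop needed
utf-8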
I stumbled upon a semi-solution, and discovered yet another strange behavior in PowerShell. It appears that when string expansion involves a collection, the user-defined ScriptMethod version of ToString() is called, as opposed to the object's original ToString() method.
So, using the above example, all that is necessary is adding one lousy comma:
PS C:\> [System.Text.Encoding] | Update-TypeData -Force -MemberType ScriptMethod -MemberName ToString -Value { $this.WebName }
PS C:\> $enc = ,[System.Text.Encoding]::ASCII # The unary comma operator creates an array
PS C:\> "$enc" ### This now works
us-ascii
This seems like a bug to me.
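Another workaround that follows directly from the behavior above: since the explicit $enc.ToString() call does hit the ScriptMethod, you can force it inside the string with a subexpression instead of relying on the collection quirk:
PS C:\> $enc = [System.Text.Encoding]::ASCII
PS C:\> "$($enc.ToString())" ### the subexpression calls the overridden ToString()
us-ascii
It's more typing than "$enc", but it makes the intent explicit and doesn't depend on array-expansion behavior.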

Passing values in Header

We are consuming an external web service (WCF) in our AX2012 project. We followed the procedure described in the following blog.
We are implementing security by passing a token in the header. However, what I am not sure of is how to do this in AX 2012.
The sample code for getting the token is:
static void myTestServiceWSDL(Args _args)
{
    myServiceWSDL.Proxies.Service.ServiceClient service;
    myServiceWSDL.Proxies.Service.LoginData LoginData;
    str token;
    System.Exception ex;
    System.Type type;
    try
    {
        type = CLRInterop::getType('myServiceWSDL.Proxies.Service.ServiceClient');
        service = AifUtil::createServiceClient(type);
        LoginData = new myServiceWSDL.Proxies.Service.LoginData();
        LoginData.set_uName("test");
        LoginData.set_pwd("test");
        token = service.Login(LoginData);
        info(token);
    }
    catch (Exception::CLRError)
    {
        ex = CLRInterop::getLastException();
        info(CLRInterop::getAnyTypeForObject(ex.ToString()));
    }
}
The token comes back fine which confirms the code is working.
Now the question is how do I set header values for the message?
If it were C#, I would have done:
using (MemberMasterClient proxy = new MemberMasterClient())
{
    using (OperationContextScope scope
        = new OperationContextScope(proxy.InnerChannel))
    {
        // set the message header
        MessageHeader header =
            MessageHeader.CreateHeader("SourceApplication",
                "urn:spike.WCFHeaderExample:v1",
                "WCFClient Application 2");
        OperationContext.Current.OutgoingMessageHeaders.Add(header);
        Console.WriteLine("Membership Details");
        Console.WriteLine("Henry's - {0}", proxy.GetMembership("Henry"));
    }
}
Could anyone let me know how to do the equivalent in X++?
One idea that has been on my mind is to write an assembly in C# which can then be called from AX 2012. I will give that a go, but ideally I'd like to code this in X++ in AX 2012.
The only thing you do differently in X++ is creating the proxy using the AIF utility. So basically, in the C# example you listed, the only difference is that proxy = new MemberMasterClient() goes through AIF. All the other code you can take into X++ as-is (except for the "using" blocks). You just need to have the right assemblies referenced in the AOT and use the full namespaces in the code.
Alternatively, as you mentioned, you can just code it all in C# and call that from AX :-)

RhinoMocks AAA Syntax

I've spent a good part of the day trying to figure out why a simple RhinoMocks test doesn't return the value I'm setting in the return. I'm sure that I'm just missing something really simple but I can't figure it out. Here's my test:
[TestMethod]
public void CopyvRAFiles_ShouldCallCopyvRAFiles_ShouldReturnTrue2()
{
    FileInfo fi = new FileInfo(@"c:\Myprogram.txt");
    FileInfo[] myFileInfo = new FileInfo[2];
    myFileInfo[0] = fi;
    myFileInfo[1] = fi;
    var mockSystemIO = MockRepository.GenerateMock<ISystemIO>();
    mockSystemIO.Stub(x => x.GetFilesForCopy("c:")).Return(myFileInfo);
    mockSystemIO.Expect(y => y.FileCopyDateCheck(@"c:\Myprogram.txt", @"c:\Myprogram.txt")).Return("Test");
    CopyFiles copy = new CopyFiles(mockSystemIO);
    List<string> retValue = copy.CopyvRAFiles("c:", "c:", new AdminWindowViewModel(vRASharedData));
    mockSystemIO.VerifyAllExpectations();
}
I have an interface for my SystemIO class, and I'm passing a mock of it to my CopyFiles class. I'm setting an expectation on the FileCopyDateCheck method and saying that it should Return("Test"). When I step through the code, it returns null instead. Any ideas what I'm missing here?
Here's the CopyvRAFiles method of my CopyFiles class:
public List<string> CopyvRAFiles(string currentDirectoryPath, string destPath, AdminWindowViewModel adminWindowViewModel)
{
    string fileCopied;
    List<string> filesCopied = new List<string>();
    try
    {
        sysIO.CreateDirectoryIfNotExist(destPath);
        FileInfo[] files = sysIO.GetFilesForCopy(currentDirectoryPath);
        if (files != null)
        {
            foreach (FileInfo file in files)
            {
                fileCopied = sysIO.FileCopyDateCheck(file.FullName, destPath + file.Name);
                filesCopied.Add(fileCopied);
            }
        }
        //adminWindowViewModel.CheckFilesThatRequireSystemUpdate(filesCopied);
        return filesCopied;
    }
    catch (Exception ex)
    {
        ExceptionPolicy.HandleException(ex, "vRAClientPolicy");
        Console.WriteLine("{0} Exception caught.", ex);
        ShowErrorMessageDialog(ex);
        return null;
    }
}
I would think that fileCopied would get the return value set by the Expect. GetFilesForCopy returns the two files in myFileInfo. Please help. :)
Thanks in advance!
A mock will not start returning recorded answers until it is switched to replay mode with Replay(). Stubs and mocks do not work in the same way; I have written a blog post about the difference.
Also note that you are mixing the old record-replay-verify syntax with the new arrange-act-assert syntax. With AAA, you should not use mocks and Expect. Instead, use stubs and AssertWasCalled like this:
[TestMethod]
public void CopyvRAFiles_ShouldCallCopyvRAFiles_ShouldReturnTrue2()
{
    // arrange
    FileInfo fi = new FileInfo(@"c:\Myprogram.txt");
    FileInfo[] myFileInfo = new FileInfo[2];
    myFileInfo[0] = fi;
    myFileInfo[1] = fi;
    var stubSystemIO = MockRepository.GenerateStub<ISystemIO>();
    stubSystemIO.Stub(
        x => x.GetFilesForCopy(Arg<string>.Is.Anything)).Return(myFileInfo);
    stubSystemIO.Stub(
        y => y.FileCopyDateCheck(
            Arg<string>.Is.Anything, Arg<string>.Is.Anything)).Return("Test");
    CopyFiles copy = new CopyFiles(stubSystemIO);
    // act
    List<string> retValue = copy.CopyvRAFiles(
        "c:", "c:", new AdminWindowViewModel(vRASharedData));
    // make assertions here about return values, state of objects, stub usage
    stubSystemIO.AssertWasCalled(
        y => y.FileCopyDateCheck(@"c:\Myprogram.txt", @"c:\Myprogram.txt"));
}
Note how setting up the behavior of the stubs at the start is separate from the assertions at the end. Stub does not set any expectations.
The advantage of separating behavior from assertions is that you can make fewer assertions per test, making it easier to diagnose why a test failed.
Does the method FileCopyDateCheck really get called with the exact strings @"c:\Myprogram.txt" and @"c:\Myprogram.txt" as arguments?
I am not sure whether FileInfo does something with c:\. Maybe it is changed to upper-case C:\, which would make your expectation not match.
Maybe try an expectation which does not check for the exact argument values:
mockSystemIO.Expect(y => y.FileCopyDateCheck(Arg<string>.Is.Anything, Arg<string>.Is.Anything)).Return("Test");
For more details about argument constraints see: Rhino Mocks 3.5, Argument Constraints
I am pretty sure that there are also possibilities to make the expectation case insensitive.
I think it's because your CopyvRAFiles() method isn't virtual.