Running .NET reference code on the server in Dynamics AX 2009 - DLL

We have an integration scenario where we have done the following activities, in the mentioned sequence:

1. Created a custom C# DLL (built with .NET Framework 3.5).
2. Signed/strong-named it using the Visual Studio signing feature.
3. Registered/published the DLL in the server GAC using GACUtil.exe.
4. Placed the DLL in the Server\Bin directory.
5. In Dynamics AX 2009, added a reference to the DLL (it appeared in the form without browsing in the file explorer, as it was already registered in the GAC).
6. Restarted the AOS service.
We can see the DLL reference in the AX client (AOT -> References) installed on the terminals.
Also, on the AOS, IntelliSense works and the code compiles without any error when we access a method from the referenced DLL.
Problem: the AX client installed on the terminals is unable to load this DLL and throws a compilation error saying the object does not exist.
We tried a full compilation and setting the RunOn = Server property, but the issue persists.
P.S. The issue is resolved if we place the DLL in the Client\Bin directory, but this is not an option: we have over 300 terminals, and placing/updating the DLL on each of them is not a practical approach.
Edit:
Now I'm running the code on the server after placing the DLL in the Client\Bin folder of the batch server, which is a different machine from the AOS server. The code executes fine on the batch server, but on the AOS and on the terminal machines it gives an error that says:
"Object 'CLRObject' could not be created"
Please guide me on what I am missing. The code of the test job is pasted below:
static server void IntConCheck(Args _args)
{
    AxIntegration.Integration                   integrationClass;
    AxIntegration.ATPIntegrationRequestContract req;
    ;
    // Assert permission to call CLR (managed) code from X++.
    new InteropPermission(InteropKind::CLRInterop).assert();

    // Instantiate the managed types from the referenced assembly.
    integrationClass = new AxIntegration.Integration();
    req              = new AxIntegration.ATPIntegrationRequestContract();

    info(integrationClass.getATPValuesJSON(req));
}

Be aware that Dynamics AX 2009 is a 3-tier system where business logic can be executed on either the server or the client tier. For this question, the customization of business logic executed on the client tier is what is relevant.
The question indicates that the .dll is called by code that is executed on the client tier; otherwise, placing the .dll in the client's bin directory would not resolve the issue.
There are several options to resolve this. The first two are operational (i.e. not programming related), so I will keep them short:
Automate maintenance: The question mentions that deploying the .dll on 300 terminals is not a practical approach. I would question this statement. Surely there is other, similar maintenance work being done on those terminals (e.g. installation or updates of software). If not already in place, a solution to automate maintenance work on those terminals should be implemented.
Use terminal servers: Instead of having the Dynamics AX client installed on 300 terminals, install it on terminal servers. Depending on how many concurrent Dynamics AX users there are, several of those might be required, but the total number should still be significantly lower than the number of terminals. Deploying the .dll on those should then be a practical approach.
The third option is the programming option. It depends on the nature of the .dll: if the functionality it provides can be executed on the server tier, the code can be modified so that the parts that use the .dll are always executed on the server tier. The question already mentions the RunOn property, which allows you to influence the executing tier. It is unclear why setting this property to Server did not resolve the issue; reviewing How To: Run a Class on the AOS might help. To get further guidance on this, a new question should be asked that shows, with some example code, what was tried to force execution on the server tier.
An alternative programming option is using Classes\SysFileDeployer to automatically deploy assemblies (client or server). See this thread discussion, this post, and this post.
PS: Be aware that as of April 12, 2022, extended support for Dynamics AX 2009 has ended. Components required by Dynamics AX 2009 (e.g. Windows or SQL Server) may also no longer be supported. This poses a significant risk, especially from a security point of view, for any company still running this version of Dynamics. It is recommended to upgrade as soon as possible to the latest supported version.

Related

Could not create Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener

While using Windows Azure Table Storage in a WCF Service Web Role, I tried to create a CloudStorageAccount in the following way:
storageAccount = CloudStorageAccount.Parse(
    Microsoft.WindowsAzure.CloudConfigurationManager.GetSetting("[Setting name]"));
I get this exception:
ConfigurationErrorsException "Could not create Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35."
MSDN help says that 1) Visual Studio must be run as an administrator. 2) A role must be running under full trust (change the .NET trust level option to Full Trust).
All done, but I still get the same exception.
One thing that can cause this error is running the web role itself, instead of running the containing cloud project. If this is the issue, you could fix it by ensuring that the cloud project is set as the startup project for debugging, and not the web role.
It's possible, and sometimes useful, to run the ASP.NET project that defines the web role on its own. This can be a lot quicker than running things in the Azure Compute Emulator. It may also enable you to develop your project without having to run VS elevated. Also, I've found that the emulator tends to cause Visual Studio to report an invalid memory access error from time to time, at which point you need to restart VS. Running the web role directly avoids all these problems.
However, there are some things that can prevent this from working, and the exception you describe is a symptom of one of these problems. If your web role's Web.config includes configuration for Azure's DiagnosticMonitorTraceListener (and Visual Studio adds that by default when you create a web role) then the first thing that tries to generate trace output will crash with the error you describe if you run outside the emulator. And as it happens, retrieving a setting from the CloudConfigurationManager appears to do this.
This isn't peculiar to the CloudConfigurationManager by the way. All it's doing is producing some trace output. VS configures web roles to send all trace output to the Azure diagnostic listener, and because that listener can only run in either the compute emulator or an actual Azure instance, the first thing that tries to produce trace output will crash. CloudConfigurationManager is a common candidate because it happens to produce trace output, and it typically gets used early on when a role starts up. But in principle, anything that produces trace output could hit this exception.
A simple way to avoid this is to remove the relevant section from the configuration file. When you create a new web role, Visual Studio adds a <system.diagnostics> section that configures the default trace output to go to the Azure diagnostic listener. You could just comment that out. That will enable you to debug the web role directly in Visual Studio without using the compute emulator (assuming you aren't doing anything else that depends on being in a role environment).
Of course, the problem with that is that you'll no longer get any diagnostic traces when running in Azure. One way to solve that is to move the relevant configuration to the Web.config.Release file (adding the necessary xdt: attributes).
This change will also stop the Azure diagnostic trace listener from running when you use the local compute emulator. (That's less of a problem, because the trace messages will still appear in the debugger. It just means you won't get persistent copies of the traces copied to table storage like you would when running for real.) The obvious way to fix this would seem to be to make a similar modification to Web.config.Debug (or to run the release build in the emulator), but there's a snag: apparently cloud projects do not apply configuration file transforms when packaging for the emulator by default. Fortunately, you can fix this: http://blog.hill-it.be/2011/03/07/no-web-config-transformation-in-local-azure/ shows how to enable transforms for local debugging in the compute emulator. (Transforms are never applied when debugging an ASP.NET project directly from within VS, by the way.)
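Another way to avoid the crash, not from the answer above but a sketch of an alternative, is to drop the listener from the <system.diagnostics> section entirely and register it from code only when a role environment is actually present. That way the plain ASP.NET project, the emulator, and real Azure instances all behave sensibly without any config transforms:

using System;
using System.Diagnostics;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Only wire up the Azure trace listener when the role environment is
        // actually available (compute emulator or a real Azure instance), so
        // running the ASP.NET project directly in Visual Studio does not
        // crash on the first piece of trace output.
        if (RoleEnvironment.IsAvailable)
        {
            Trace.Listeners.Add(new DiagnosticMonitorTraceListener());
        }
    }
}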
I've found that this error is caused by the wrong version in your web.config. I.e., you may not actually have Version=1.0.0.0 installed; Microsoft.WindowsAzure.Diagnostics is up to version 1.8.0.0 as of now. Try updating web.config to the current version.
Remove the lines in Web.config that register the listener, i.e. the <add type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, ..." /> entry under <system.diagnostics>.

Publish Biztalk WCF Service with multiple schema files and <includes>

What I have here is this:
A Biztalk project in Visual Studio 2010 and a corresponding Biztalk application running on a Biztalk 2010 server. The receive port accepts an HL7 v3 schema and transforms it to a schema that is sent off to a SQL Server 2008 instance and inserted into tables via a stored procedure. When the receive port is using the FILE adapter, all works as intended (data from the HL7 file is inserted into the tables).
So, we reached the point where a web service was needed in order to expose the receive port via the web... great, we have the "Biztalk WCF Service Publishing Wizard" built right into VS2010. This is where I'm stopped in my tracks.
I can follow the wizard as far as the "Create" step; it makes it about half-way through "Extracting Schemas from Biztalk Assembly", then it barfs and throws a generic error:
"The given key was not present in the dictionary"
After much searching and head scratching, I was finally led to the fact that the wizard uses Xsd.exe (new to me) to generate code from the schemas. This led me to the MSDN library article here, which states that included schemas are ignored by Xsd.exe. Well, the HL7 v3 schema set for the message we are using has about 30 files altogether, all referencing each other all over the place like so:
<xs:include schemaLocation="../coreschemas/infrastructureRoot.xsd"/>
<xs:include schemaLocation="COCT_MT050002UV07.xsd"/>
<xs:include schemaLocation="COCT_MT090100UV01.xsd"/>
<xs:include schemaLocation="COCT_MT240000UV01.xsd"/>
<xs:include schemaLocation="COCT_MT150000UV02.xsd"/>
So there's my problem.
So now my question is this: Is there a way to manually create a WCF service from a Biztalk project, or better yet, just get the Wizard to work for this case? Or, just any suggestions on where to look, as this is my first Biztalk project.
My Googling has only come up with a plethora of how-to's for the Wizard.
Well, the problem has been solved. Despite running down way too many rabbit holes, I stumbled upon an MSDN article called Getting Started with HL7 v3 and Biztalk Server 2006, with a little section called Schema Modifications. One of the modifications is to add a target namespace to some of the coreschemas files in HL7 v3.
I had seen this doc in the past, and it mentions that this fixes the issue of those files not being supported when compiling schemas in BizTalk Server. I kind of ignored it because I was getting no errors, and besides, I was using 2010, not 2006, so I naively thought "that must be fixed by now... no errors".
Not so. I did exactly as the document suggested, then immediately deployed and ran the Biztalk WCF Service Publishing Wizard; it all worked, and I was able to view the help and .wsdl pages that were generated.
I hope this helps someone in the future. Very anti-climactic for me.

ASP.NET gurus - small issue when setting app domain name for sharing SQL session in scale-out scenario

We have scaled-out some portions of our ASP.NET app to run on one server, and other portions to run on another server (& under a subdomain).
The two servers share (SQL Server) session state. We used this MS article to create a tiny HTTP module to sync the app domain name between the two servers (sans the cookie domain code, which can be configured in the web.config; I later found this CodeProject article, which is essentially the same).
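For context, the module from those articles looks roughly like this. This is only a sketch: the private field names are the ones used in the referenced KB/CodeProject code, they rely on ASP.NET internals and may differ between framework versions.

using System.Configuration;
using System.Reflection;
using System.Web;

// HTTP module that forces both servers to report the same application name,
// so SQL Server session state rows written by one server can be read by the other.
public class SharedSessionModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        // Application name shared by both servers, e.g.
        // <add key="ApplicationName" value="MyApp" /> in appSettings.
        string appName = ConfigurationManager.AppSettings["ApplicationName"];

        // Overwrite the private app domain id that the SQL session provider
        // uses (field names as in the referenced KB/CodeProject code).
        FieldInfo runtimeField = typeof(HttpRuntime)
            .GetField("_theRuntime", BindingFlags.Static | BindingFlags.NonPublic);
        HttpRuntime runtime = (HttpRuntime)runtimeField.GetValue(null);

        FieldInfo appIdField = typeof(HttpRuntime)
            .GetField("_appDomainAppId", BindingFlags.Instance | BindingFlags.NonPublic);
        appIdField.SetValue(runtime, appName);
    }

    public void Dispose() { }
}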
Everything's working well, except for a small issue: deployment changes or web.config tweaks require a manual app pool recycle (the auto-recycle no longer works - instead we get the "web server is currently unavailable / hit refresh" error).
I tried moving the app domain naming code from the HTTP module into the Application_Start section of the Global.asax (maybe this is a better place for it?) but hit the same problem.
I know that one solution is to hard-code the app name in one of the SQL Server session stored procedures, but I am a bit hesitant to do this.
Edit: The app is ASP.NET 3.5 under IIS 6.0 (thanks #Chris & #bzlm)
You should check whether the proper recycling events are turned on in IIS; maybe this can help: http://support.microsoft.com/kb/332088
Update. We opened a tech support case with Microsoft about this. After a week or so of back & forth, they said they had reproduced the issue in their environment and understand the cause (a timing issue deep inside the ASP.NET internals) - but that there is no resolution that they're aware of. I complained that the HTTP module is Microsoft code, but they said that this code is under "FAST PUBLISH" terms - intended to help & advise customers; yet not warranted.
Ah well. We now just manually recycle the app pool after making a web.config change.

WCF service deployed to Azure

I have created a WCF Service Web Role project. I can consume the service locally, but I am having issues trying to deploy the service to the Azure cloud. After starting the web role, it just keeps going in a loop where it initializes and then stops. I have not made any changes to the default WebRole class that was added automatically. Can anybody point me to some samples or examples of WCF being deployed to Azure?
The behaviour you're seeing occurs when the instance errors in OnStart or Run. The usual diagnostics error trapping hasn't had a chance to start yet, so this is a difficult problem to debug. You might try adding error trapping inside these functions that writes the error details out to either a blob or a queue so that you can see what is actually happening.
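A minimal sketch of that idea, assuming the 1.x StorageClient library and a storage connection string setting named "DiagnosticsConnectionString" (both are assumptions, adjust to your project):

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        try
        {
            // ... your normal startup work goes here ...
            return base.OnStart();
        }
        catch (Exception ex)
        {
            try
            {
                // Write the failure to a blob so it can be inspected after the role cycles.
                var account = CloudStorageAccount.Parse(
                    RoleEnvironment.GetConfigurationSettingValue("DiagnosticsConnectionString"));
                var container = account.CreateCloudBlobClient()
                                       .GetContainerReference("startup-errors");
                container.CreateIfNotExist();
                container.GetBlobReference(DateTime.UtcNow.Ticks + ".txt")
                         .UploadText(ex.ToString());
            }
            catch
            {
                // Never let the logging itself take the role down.
            }
            throw;
        }
    }
}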
Having said that, with code that works in the dev fabric but continues to cycle when deployed to live, the first thing to check is that all of the references have the appropriate "Copy Local" property set. Anything that is part of the framework or Microsoft.WindowsAzure.ServiceRuntime will need Copy Local set to false; everything else should be set to true (third-party assemblies and the like). If this is a web role and you're using MVC, you'll need to check that System.Web.Mvc has Copy Local set to true as well, as this is not included as part of the standard framework deployed in Azure.
Have you looked at the Known Issues information on the WCF Azure code page? There's a patch that's needed, as well as a tweak to the service behavior. Hopefully this will help you.
I just found out the root of the problem: it was caused by one of my projects having the target platform set to x86. It seems x86-built assemblies are not supported, which can be a problem.

FluentNHibernate blows up in Windows Service but not website

I've got a class library doing all my NHibernate stuff. It also handles all the mapping using Fluent NHibernate - no mapping files to deploy.
This class library is consumed by a number of apps, including a Windows Service running on my computer. Although it works fine in all my web apps, the Windows Service gets this when it tries to use NHibernate:
An invalid or incomplete configuration was used while creating a SessionFactory. Check PotentialReasons collection, and InnerException for more detail.
at FluentNHibernate.Cfg.FluentConfiguration.BuildSessionFactory()
at Kctc.NHibernate.KctcSessionFactory.get_SessionFactory() in C:\Kctc\Trunk\Kctc.NHibernate\KctcSessionFactory.cs:line 28
...more stack trace...
I have checked for an InnerException and there doesn't appear to be one. I have no idea what the PotentialReasons collection is, and Google doesn't seem to be forthcoming either.
This is my dev machine, so when I'm working on my web apps they run locally (i.e. using the web server in Visual Studio). The fact that the Windows Service and my dev web apps are running on this same machine suggests it's not to do with trust settings or what have you.
Can anyone suggest what I should try? This is one of those ones where I'm so stumped I can't even think of how to get more information about the problem.
Just a wild guess: NHibernate picks up the hibernate.cfg.xml file from the execution directory. Did you configure the execution directory of the service so that it can find this file?
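If that turns out to be the cause, one small sketch of a fix in the service itself (the service class name here is hypothetical): set the current directory to the install folder before NHibernate is configured, because a Windows Service starts with its working directory pointing at the system directory rather than at its own folder.

using System;
using System.IO;
using System.ServiceProcess;

static class Program
{
    static void Main()
    {
        // A Windows Service starts with its working directory set to the
        // system directory, so files resolved relative to it (such as
        // hibernate.cfg.xml) are not found. Point it at the install folder.
        Directory.SetCurrentDirectory(AppDomain.CurrentDomain.BaseDirectory);

        ServiceBase.Run(new MyNHibernateService()); // hypothetical service class
    }
}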
I've found out what the problem is. The Service did not deploy with the required NHibernate.ByteCode.LinFu.dll.
I appear to have an ongoing problem with the Visual Studio build not always copying indirect dependencies (i.e. DLLs required by class libraries that are themselves required by the app) into the output folder. I should have thought of this sooner, really.
Thanks for racking your brains on my behalf guys.
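For anyone hitting the same indirect-dependency problem: one common workaround (not from the answer above, just a sketch) is to add a direct reference to NHibernate.ByteCode.LinFu in the service project and touch one of its types in code, so the build treats it as a used dependency and copies it to the output folder:

using System;

// Touching a type from NHibernate.ByteCode.LinFu makes the assembly a real,
// used reference of the Windows Service project, so it gets copied to the
// output folder on every build instead of being silently left behind.
internal static class ByteCodeLinFuAnchor
{
    // ProxyFactoryFactory is the proxy factory class shipped by the LinFu
    // byte-code provider; it is never instantiated here.
    private static readonly Type ProxyFactory =
        typeof(NHibernate.ByteCode.LinFu.ProxyFactoryFactory);
}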
I bet the name of the connection string is missing from the app.config. For me that message is almost exclusively a missing connection string.
Are you targeting the same database or could it be some sort of schema mismatch between databases?
Could it be authentication issues on the service like you use windows authentication where it can't be used (or the sql authentication that doesn't work)?
It's hard to tell when there is no code, just an exception!
EDIT: Are you ever using HttpContext, HostingEnvironment or anything else specific to the "web"?
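To illustrate the connection-string guess above: with Fluent NHibernate a named connection string is read from the config file of the executable that runs the code, so the Windows Service needs it in its own app.config, not only in the web apps' web.config. A minimal sketch (the connection string name and SQL Server dialect are placeholders, not from the question):

using FluentNHibernate.Cfg;
using FluentNHibernate.Cfg.Db;
using NHibernate;

public static class SessionFactoryBuilder
{
    // "MyConnectionString" must exist in the consuming app's own config file;
    // if it is missing, BuildSessionFactory() fails with the same
    // "invalid or incomplete configuration" error.
    public static ISessionFactory Build()
    {
        return Fluently.Configure()
            .Database(MsSqlConfiguration.MsSql2008
                .ConnectionString(c => c.FromConnectionStringWithKey("MyConnectionString")))
            .Mappings(m => m.FluentMappings.AddFromAssemblyOf<SessionFactoryBuilder>())
            .BuildSessionFactory();
    }
}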