I am facing a new problem in Worklight: when I give a package name like bha.testing, the adapter throws an error saying that bha is not defined.
If I give a name like com.testing, change my adapter accordingly, and run it, it runs fine.
So my question is: is it compulsory in Worklight to give a package name that starts with com?
I believe you can use com, net, or org for package names.
This is due to a limitation in the Rhino engine version used within Worklight.
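This is presumably because Rhino's LiveConnect support predefines the top-level names Packages, java, javax, com, org, net, and edu as Java package roots, so com.testing resolves, while an arbitrary root like bha is simply an undefined global. A minimal sketch of the behaviour with stock embedded Rhino (assuming Worklight's bundled engine behaves like the standalone one; class and source names here are illustrative):

    import org.mozilla.javascript.Context;
    import org.mozilla.javascript.EcmaError;
    import org.mozilla.javascript.Scriptable;

    public class RhinoPackageRoots {
        public static void main(String[] args) {
            Context cx = Context.enter();
            try {
                Scriptable scope = cx.initStandardObjects();
                // com, net and org exist as predefined LiveConnect package roots:
                System.out.println(cx.evaluateString(scope, "typeof com", "test", 1, null)); // object
                // An arbitrary root does not exist, so assigning to bha.testing fails:
                cx.evaluateString(scope, "bha.testing = {};", "test", 1, null);
            } catch (EcmaError e) {
                System.out.println(e.getMessage()); // ReferenceError: "bha" is not defined.
            } finally {
                Context.exit();
            }
        }
    }

If you need to keep a custom root, declaring it yourself at the top of the adapter script (e.g. var bha = {};) should presumably avoid the "not defined" error.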
I'm a beginner with RavenDB and can't seem to get started: I'm stuck at loading an entity in C#.
I am getting a null reference exception.
Below is a screenshot of the exception:
And in RavenDB Studio, it looks like this:
So I'm totally stuck now.
I am quite sure I have done everything else right.
The client connects to the server with the correct Url, the DefaultDatabase is correct, and the session.Load parameter is the correct id.
Hope someone can help :-)
Make sure that the versions of the server package and the client NuGet package match. In your code you use Url and DefaultDatabase on DocumentStore, both of which were renamed a long time ago (May 2017) to Urls and Database.
It is very likely that you are using an outdated client package. Install the client package matching the version of RavenDB you are running, using the Package Manager Console in Visual Studio with a command like this:
Install-Package RavenDB.Client -Version 4.0.0-nightly-20180123-0500 -Source https://www.myget.org/F/ravendb/api/v3/index.json
This command installs the latest nightly build; you want to use the -Version that matches the server you are running.
Find the appropriate version here: https://ravendb.net/download
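Once the versions match, connecting and loading looks like this in the 4.x API. Here is a minimal sketch using the RavenDB JVM client (net.ravendb:ravendb); the URL, database name, and Employee shape are illustrative, and in the 4.x C# client the corresponding DocumentStore properties are Urls and Database:

    import net.ravendb.client.documents.DocumentStore;
    import net.ravendb.client.documents.IDocumentStore;
    import net.ravendb.client.documents.session.IDocumentSession;

    public class RavenLoadDemo {
        // Minimal entity shape; RavenDB maps documents onto plain beans.
        public static class Employee {
            private String name;
            public String getName() { return name; }
            public void setName(String name) { this.name = name; }
        }

        public static void main(String[] args) {
            // The server URLs (plural) and the database name are given up front.
            IDocumentStore store = new DocumentStore(
                    new String[] { "http://localhost:8080" }, "MyDatabase");
            store.initialize();
            try (IDocumentSession session = store.openSession()) {
                // Load by document id; returns null if no such document exists,
                // which is worth checking before dereferencing the result.
                Employee employee = session.load(Employee.class, "employees/1-A");
                System.out.println(employee == null ? "not found" : employee.getName());
            } finally {
                store.close();
            }
        }
    }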
I'm using the trial version of the Mule Standalone EE server (I checked the error message). I can run my project within Anypoint Studio, but when I deploy to Mule Standalone, it fails with this error: Invalid content was found starting with element 'data-mapper:config'.
Does anyone have any idea about this error?
I believe that even if I used a Tomcat server I would end up with the same error - or would it work using some other runtime?
DataMapper is no longer included in 3.7, so if you use it you must use the latest 3.6 runtime.
When deploying through Anypoint Studio, however, it should still work normally.
In 3.7, DataMapper was replaced by another enterprise feature called DataWeave.
See this reference: http://forum.mulesoft.org/mulesoft/topics/developing-applications-using-mule-soft-community-ide-and-enterprise-ide-and-deploying-in-community-standalone-and-enterprise-standalone
and also here: https://developer.mulesoft.com/docs/display/EARLYACCESS/Including+the+DataMapper+Plugin
Thanks, Anirban, for the response. I found the resolution in the second link you provided.
However, that link says to download a plugin zip, which I couldn't find anywhere.
Instead, I searched the Anypoint Studio files, found the data-mapper-plugin folder, and copied it to the MuleServer\Plugins location. And it worked.
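For anyone hitting the same thing, on Windows the copy amounts to something like this (both paths are illustrative and depend on where Studio and the standalone server are installed):

    xcopy /E /I "C:\AnypointStudio\plugins\data-mapper-plugin" "C:\MuleServer\plugins\data-mapper-plugin"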
We're using Worklight 6.1.0.0 / WebSphere 8.0.0.2 (ND/AIX).
This seemed pretty close to my question too, but for version 6.0.
I've successfully done an uninstall/install of our Worklight Console WAR package. However, that involves some extra work re-deploying adapters and such, so I was looking for a way to just update the console. Among the Ant tasks there is a target 'minimal-update', which sounds like what I'm looking for (is it?). However, when all the other pieces fell into place, I got an error about mapping the datasources:
ADMA0007E: A validation error occurred in task Mapping resource references to resources. The Java Naming and Directory Interface (JNDI) name is not specified for resource reference jdbc/WorklightDS in module Worklight with EJB name .
The contents of the 'minimal-update' target are pretty much the same as for 'install'.
I tried doing the update from the WebSphere admin console (but I should use the Ant task, right?), which gave me a wizard screen to map jdbc/WorklightDS from the package to jdbc/WorklightDS on the server. This left me wondering how I could specify that mapping using the Ant task.
The ant target minimal-update of the sample configuration files documented at http://pic.dhe.ibm.com/infocenter/wrklight/v6r1m0/topic/com.ibm.worklight.deploy.doc/devref/c_ant_tasks_sample_config_files.html is meant to update a WAR file already deployed (and not uninstalled). In particular, on WAS, it assumes that the JNDI datasources are in place.
If you have uninstalled the WAR file, you should use the target install instead, provided that your databases were created for Worklight 6.1.
If they were created for a previous version of Worklight, you must upgrade their schema as well by running the target 'databases' (and if it is a production installation, you might want to read all the steps in detail at http://pic.dhe.ibm.com/infocenter/wrklight/v6r1m0/topic/com.ibm.worklight.upgrade.doc/devenv/c_upgrade_to_srvr610_in_production_env.html).
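In practice, assuming your customized copy of the sample configuration file is named configure-was.xml (the file name is illustrative), the invocations would be along these lines:

    ant -f configure-was.xml databases   # upgrade/create the schemas first, if coming from an older Worklight
    ant -f configure-was.xml install     # full deployment, which also sets up the jdbc/WorklightDS mapping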
I'm starting out with Mule, and I've noticed that the mflow files tend to get rather large; even using the visual view in Mule Studio, it's hard to take it all in. I read somewhere that you can put each flow in a different file, and then all the flows get deployed together and can call each other.
The problem is that I created my own custom transformer that I want to use in two different flows, but if I declare a global custom transformer in each file, I get an error saying that the name already exists.
So I tried placing the custom transformer in its own mflow file. That works at runtime, but Mule Studio doesn't seem to understand it at "compile time", and my mflow files are riddled with errors stating "Reference to unknown global element". How can I import global elements from one mflow file into another so that Mule Studio stops complaining?
Maybe this isn't the correct way to do it at all. If so, I'd be happy to hear about any other way to achieve my goal.
Thanks in advance
This was a known issue with Studio whereby it doesn't recognise global elements in other config files but still runs the application fine: https://www.mulesoft.org/jira/browse/STUDIO-1881
This should be fixed in version 3.4 of Studio. What version of Studio are you using?
And yes, centralising reusable config elements is a common approach. More info on sustainable development with Mule here: http://www.mulesoft.org/documentation/display/current/Team+Development+with+Mule
Global elements have been accessible across all the Mule flows in an application since Mule Studio 3.4.
This is really annoying: my Java web application runs perfectly when deployed on Tomcat on Windows, but when it is deployed to Linux, HSQLDB starts throwing exceptions about bad SQL grammar and the syntax of elementary SQL statements. For example, the "IF EXISTS" in "DROP TABLE Test IF EXISTS" is reported as an error, or the "double" type is not supported. I tested with HSQLDB 2.1.0 and 1.8.1 - same errors.
My brain is in a real stack overflow. Am I struggling with a known issue?
P.S.
After further investigation, it turns out that on Linux my web application suddenly switches to Tomcat's DBCP instead of using the DBCP in WEB-INF/lib.
HSQLDB does not run differently on Linux. It is possible that an earlier-version jar is on Tomcat's classpath on the Linux machine; in that situation, classes from the older jar may be picked up.
You can add a test to your app to detect the presence of such early-version jars.
For example, DatabaseMetaData#getDriverVersion() will report the version of the jar actually in use.
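A minimal sketch of such a check (it uses a throwaway in-memory HSQLDB connection here; in the web app you would run it against the same DataSource the application actually uses):

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class DriverVersionCheck {
        public static void main(String[] args) throws SQLException {
            // Open a connection; the driver is resolved from the classpath,
            // so this reports whichever hsqldb jar actually got loaded.
            try (Connection conn = DriverManager.getConnection("jdbc:hsqldb:mem:test", "SA", "")) {
                DatabaseMetaData md = conn.getMetaData();
                // An unexpected version here points to a stale jar elsewhere
                // on the classpath (e.g. in Tomcat's lib directory).
                System.out.println("Driver:   " + md.getDriverName() + " " + md.getDriverVersion());
                System.out.println("Database: " + md.getDatabaseProductName() + " " + md.getDatabaseProductVersion());
            }
        }
    }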
I apologize for the question. The cause of the problem had nothing to do with the HSQL database: the datasource was created using Spring, and I was using the same name for the datasource bean in two different places.