I have a Tomcat instance running an openrdf-sesame environment. By default, my openrdf-sesame database configuration and data are stored at %APPDATA%\aduna. I am trying to change where this data is saved to a custom location such as C:\aduna. I have looked at documentation online, but it does not specify whether this location is defined in a configuration file somewhere or hard-coded. I also saw that RDF4J is the new replacement for openrdf-sesame? I wouldn't mind upgrading if that would let me specify where to save my data. Any ideas?
OpenRDF Sesame is no longer maintained; it has been succeeded by the Eclipse RDF4J project. There is a migration guide available to help you figure out what to do when updating your project.
Although the Sesame project is no longer maintained, a documentation archive is available, and of course a lot of the RDF4J documentation also applies to Sesame, albeit with slightly different package names.
As for your specific question: the directory Sesame Server uses is determined by the system property info.aduna.platform.appdata.basedir. Set this property (at JVM startup, using the -D flag) to the desired location. See the archived documentation about application directory configuration for more details. As an aside: note that in RDF4J this property has been renamed (to org.eclipse.rdf4j.appdata.basedir), so if you upgrade to RDF4J, be sure to change this.
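For example, on Windows you could set it in Tomcat's bin\setenv.bat (creating the file if it does not exist; the path below is just an illustration):

set CATALINA_OPTS=%CATALINA_OPTS% -Dinfo.aduna.platform.appdata.basedir=C:\aduna

On Linux, the equivalent line would go in bin/setenv.sh, e.g. CATALINA_OPTS="$CATALINA_OPTS -Dinfo.aduna.platform.appdata.basedir=/var/aduna".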
I am attempting to check a VxWorks Workbench version 4.5.2 project into a CM (configuration management) system but am running into issues. I have just started learning VxWorks Workbench. I have searched a bit for how others have done this, and I came across two solutions:
Make the project path relative in a sub-directory in the WindRiver home directory.
Do not version Workbench project files and just version my source code. Every client must re-create the Workbench project on their local machine.
The first solution would be OK, but I have not been able to make it work. There seems to be a registry entry (Windows 10) or a path stored elsewhere for Workbench-related workspace locations, but the end result is that the project is not loaded.
The second solution would be a last resort that I would prefer not to use, as there are many steps to re-creating a project on a local machine, making it tedious and error-prone. Does anyone have experience versioning Workbench 4 projects in CM who can share possible solutions?
This can be complicated, and it very much depends on what project type you are talking about.
For DKMs, RTPs, Static and Shared Library projects, you need to version these project files:
.wrproject
.wrmakefile
.cproject
.project
Other project types will have these, but also some additional files that are required to recreate the project; for instance, VIP projects also have a <projectname>.wpj file. An exhaustive list is too long for this answer, however.
You do not need to version the automatically generated Makefile, nor do you need to version the automatically generated build subfolders.
My advice is to store the projects alongside your code. I personally prefer to store my projects outside of my workspace, and they should certainly not be stored anywhere in your Wind River installation folder.
I tend to use a structure like this:
c:\gitrepositorys\CuriousCamel\Source\
    dkmProject1
        .wrproject
        .project
        .wrmakefile
        .cproject
        dkm.c
    dkmProject2
    dkmProject3
    vipProject
    etc.
The above are all versioned. In terms of the actual Workbench workspace, I tend to create it in c:\gitrepositorys\CuriousCamel\Workspace, and this is explicitly not versioned - I create it fresh for each clone, and often delete and recreate it when I switch branch.
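If you use git, a minimal .gitignore sketch along these lines (the Workspace folder name comes from the layout above; the build-spec folder name is only an example, since generated build folders are named after your build specs):

# workspace is recreated per clone, never versioned
Workspace/
# auto-generated files inside each project
Makefile
SIMNTgnu/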
Existing projects can be imported into the workspace using the import wizard (using the General -> Existing Projects from Filesystem option). When you create a new project, just make sure you choose the "Create Project in external location" option and select wherever you have chosen to store your projects.
I installed Perl 6 with rakudobrew and wanted to browse the installed files, only to find a list of hex filenames in ~/.rakudobrew/moar-2018.08/install/share/perl6/site/sources as well as ~/.rakudobrew/moar-2018.08/install/share/perl6/sources/.
E.g.
> ls ~/.rakudobrew/moar-2018.08/install/share/perl6/sources/
09A0291155A88760B69483D7F27D1FBD8A131A35 AAC61C0EC6F88780427830443A057030CAA33846
24DD121B5B4774C04A7084827BFAD92199756E03 C57EBB9F7A3922A4DA48EE8FCF34A4DC55942942
2ACCA56EF5582D3ED623105F00BD76D7449263F7 C712FE6969F786C9380D643DF17E85D06868219E
51E302443A2C8FF185ABC10CA1E5520EFEE885A1 FBA542C3C62C08EB82C1F4D25BE7B4696F41B923
522BE83A1D821D8844E8579B32BA04966BAB7B87 FE7156F9200E802D3DB8FA628CF91AD6B020539B
5DD1D8B49C838828E13504545C427D3D157E56EC
The files contain the source of packages, but this does not feel very accessible. What is the rationale for that?
In Perl 6, the mechanism for loading modules and caching their compilations is pluggable. Rakudo Perl 6 comes with two main mechanisms for this.
One is a file-system based repository, and it's used with things like -Ilib. This resolves modules simply using paths on disk. Whenever a module is loaded, it first has to check that the module's sources have not changed, in order to re-compile them if so. This is ideal for development, but such checks take time. Furthermore, this doesn't allow for having multiple versions of the same module available and picking the one matching the specification in the use statement. Again, that's ideal for development, when you just want it to use your latest changes, but less so for installing modules from the ecosystem.
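For example (Some::Module is just a placeholder name):

# during development: resolve modules from ./lib, re-checking sources on every run
perl6 -Ilib -e 'use Some::Module;'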
The other is an installation repository. Here, specific versions of modules are installed and precompiled. It is expected that all interactions with such a repository will be done through the API, or through tools using the API (for example, zef locate Some::Module). It's assumed that once a specific version of a module has been installed, it is immutable. Thus, no checks need to be done against the source, and loading can go straight to the compiled version of the module.
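So rather than poking around in those directories by hand, you can ask the tooling to do the mapping. For example (recent zef releases include a --sha1 option for exactly this; check zef --help on your install):

# module name -> installed source file
zef locate Some::Module
# one of the SHA-1 filenames -> the module it belongs to
zef locate --sha1 09A0291155A88760B69483D7F27D1FBD8A131A35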
Thus, the installation repository is not intended for direct human consumption. The SHA-1s are primarily an implementation convenience; an alternative scheme could have been used in return for a bit more effort (and may well be in the future). However, the SHA-1s do also create the appearance of something that wasn't intended for direct manipulation - which is indeed the case: editing a source file in there will have no immediate effect, and probably confusing effects the next time the compiler is upgraded to a new version.
We are migrating a project from Sitecore 6.2 to Sitecore 7.1.
I am trying to install the Active Directory package "Sitecore Active Directory 1.1 rev. 130705" after the upgrade of Lucene search.
I am getting the error
"Sitecore.Exceptions.ConfigurationException: Could not create instance of type: Sitecore.ContentSearch.LuceneProvider.Analyzers.DefaultPerFieldAnalyzer. No matching constructor was found.".
This happens when I try to install the package using the Installation Wizard in Sitecore 7.1.
As the exception message states, it is an issue with Sitecore's ContentSearch. It is not related to the version of Lucene or to upgrade issues. The source of the issue is an incorrect Lucene or Solr (depending on which one you are using) IndexConfiguration.config file.
Check this first
Before proceeding, make sure it's not human error. This error will be displayed if you have the index configuration file in App_Config/Includes/ more than once, or if two or more index configuration files use the same XML element name.
Option 1 - Remove the file
You can remove the offending IndexConfiguration.config from the /App_Config/Includes/ folder and update the related index config files to use the DefaultIndexConfiguration in the configuration XML node:
<configuration ref="contentSearch/indexConfigurations/defaultLuceneIndexConfiguration" />
Option 2 - Fix the file
The other option is to amend the custom IndexConfiguration. Most developers are used to creating a custom index by copying the entire content of a config file, and so will copy the entire DefaultIndexConfiguration config to create a custom IndexConfiguration, which causes the exception. This is not needed.
You only need a small number of the settings from the DefaultIndexConfiguration config, as shown in this blog on how to create a custom IndexConfiguration.
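As a very rough sketch only (the element name myCustomIndexConfiguration is invented here, and the exact set of settings you must keep varies by Sitecore version - compare against your DefaultIndexConfiguration file):

<sitecore>
  <contentSearch>
    <indexConfigurations>
      <myCustomIndexConfiguration type="Sitecore.ContentSearch.LuceneProvider.LuceneIndexConfiguration, Sitecore.ContentSearch.LuceneProvider">
        <!-- only the handful of settings you actually override - not a full copy of the default -->
      </myCustomIndexConfiguration>
    </indexConfigurations>
  </contentSearch>
</sitecore>

Your index definition then references it via <configuration ref="contentSearch/indexConfigurations/myCustomIndexConfiguration" />.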
I haven't checked any code, but it sounds like the caller ("Sitecore Active Directory 1.1 rev. 130705") is compiled against an older version of Lucene.Net. Check the references of the AD package and make sure you're using the same version of Lucene.Net, or upgrade your AD package to use the newer version.
I can't say how much coding is involved in upgrading the Lucene.Net-related code (if you have access to it), but updates during the last year or two are mostly casing changes or get-methods transformed into real properties.
I had a similar issue, and it had to do with a developer creating a backup file in the app_config\include directory.
They were updating one of the config files, and in order to create a backup they just copied the file, which created another .config file (SameName-copy.config).
To fix it, we just renamed the copy to SameName-copy.config.bak-ccyymmdd.
Hope this helps someone in the future with a similar issue.
Note that you cannot upgrade directly from 6.2 to 7.1. You will have to go in steps: to 6.5, then 7.0, and then 7.1.
Please don't be too harsh, because I still don't entirely grasp this, but msbuild/msdeploy is giving me some headaches lately.
Hopefully someone can provide a textual aspirin of some kind? So here is what I want to do:
I have a web application project that has multiple configurations, and thus multiple web.config transforms.
I would like to deploy this project from the command line.
I would rather not modify its project file. (I want to be able to do this for several web applications, so as little editing as possible is much appreciated.)
I would like to be able to build it only once and then deploy the different configurations from it.
So far I deployed from command line using something like this:
msbuild D:\pathToFile\DeployVariation01.csproj
    /p:Configuration=Debug;
       Platform=AnyCpu;
       DeployOnBuild=true;
       DeployTarget=MSDeployPublish;
       MSDeployServiceURL="localhost";
       DeployIisAppPath="DeployApp/DeployThis01";
       MSDeployPublishMethod=InProc
And this does just what I want, except it only deploys the "Debug" configuration.
How can I, with minimal adjustments, make it deploy my other configurations as well?
I was thinking maybe I could build a package that includes all my configurations and then deploy from that and decide "while deploying" which configuration to deploy?
Unfortunately, I am pretty much stuck here; the approaches I have read about all seem to require some modifications to project files. Is there a way around that?
UPDATE:
I am still not really where I want to be here :).
But I looked into this PackageWeb approach (there is also an interesting video about it here), and it seems pretty nice; I can now build a package that includes all my transforms and then deploy from it as often as I want into multiple configurations.
One thing that I dislike about this is that I have to store my password in plain text in the generated parameters file for the PowerShell script. Does someone know a way around this? I would really rather have that password encrypted.
Also other approaches to solve my original problem are still appreciated.
I am working on the same problem and am taking two paths using Microsoft Web Deploy (MSDeploy), which is now at version 3.0.
I first compile the project with MSBuild using the Package target, passing in system.configuration and system.packagelocation. The Package target generates a set of package files, including a {PackageName}.SetParameters.xml file. The SetParameters.xml file by default allows on-publish changes to ConnectionStrings, without recompiling, when using msdeploy.exe to publish the package. The publish transformation process can also be customized by adding a parameters.xml file, defining additional parameterized web.config settings which can be changed at deploy time.
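That first step looks roughly like this from the command line (the project name and package path here are illustrative, not from my actual setup):

msbuild MyApp.csproj /t:Package /p:Configuration=Release /p:PackageLocation="C:\packages\MyApp.zip"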
After the initial build, I use the {PackageName}.deploy.cmd file generated by MSBuild during the Package process to deploy the package to the target website. The Package process essentially duplicates what you are currently doing from MSBuild, in that I can publish one build-configuration web.config transform from one compile. The process provides consistent deployments that can target remote servers from a central CI environment, which is great from a purely deployment perspective. The package build/deploy process is parameterized within TeamCity, requiring changes to only a few parameters to set up a new deployment.
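For instance, the generated script can be pointed at a remote Web Deploy endpoint along these lines (server name and credentials invented for the example; /T performs a trial run, /Y the actual deployment):

MyApp.deploy.cmd /Y /M:https://deploytarget:8172/msdeploy.axd /U:deployUser /P:secret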
Like you, I cannot, however, compile a single version of code and deploy to multiple servers using the process as it exists today - which is my current focus. I want to parameterize the transform in a Continuous Deployment, build-once-deploy-many pattern to Dev, QA, User Testing, Staging, and Production.
I anticipate using one of two methods:
Create a Parameters.xml file for each project, defining the variable deployment parameters, along with a custom {ServerName}.SetParameters.xml for each target deployment, both to be used in conjunction with msdeploy.exe (both files are sketched just after this list).
a. I am not sure defining a parameters.xml is a flexible enough process for my needs, as the current project inserts and removes a variable number of web.config settings. Implementing a parameters file incorporating all of the variables could be too complex for my taste. I would also end up creating all of the target transformations myself, instead of the current developer-initiated process. Not ideal.
I am following up on very recent updates to VS2012 Web Tools 2012.2 which allow tying a web.config transform to the publish profiles (profile.pubxml) now stored under SolutionName/Properties/PublishProfiles in VS2012.
VS2012 release 2012.2 adds the capability to create a second transform tied to the publish profile. The resulting transform process first runs the build configuration transformation, followed by the publish transformation, i.e. Release Transform followed by TargetServer Transform. Sayed Hashimi has a great YouTube video demonstrating the entire process using MSBUILD.
What is not entirely clear is whether the second transform is supported separately from the build, using MSDeploy in a Continuous Deployment, build-once-deploy-many pattern, or whether the publish transformation is only supported during a separate Package/Build for each target transformation.
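For reference, the shape of the two files from option 1 (the parameter name, XPath, and values are invented for illustration):

<!-- Parameters.xml, in the project root -->
<parameters>
  <parameter name="Example-AppSetting" defaultValue="dev-value">
    <parameterEntry kind="XmlFile" scope="\\web\.config$" match="/configuration/appSettings/add[@key='ExampleKey']/@value" />
  </parameter>
</parameters>

<!-- QAServer.SetParameters.xml, one per target environment -->
<parameters>
  <setParameter name="Example-AppSetting" value="qa-value" />
</parameters>

msdeploy.exe then picks up the per-server values via its -setParamFile:QAServer.SetParameters.xml argument.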
Option 1 will definitely work for some environments and was my first plan for tackling a Continuous Deployment process. I would much rather use Web Transforms to accomplish the process if possible.
An outside third possibility is using one of several CodePlex command-line projects that are capable of transforming web.config using the XDT transform engine. Unfortunately, using these tools would mean splicing the results into the build/package MSBuild process in order to get the resulting web.config transformation into the deployment package - something I've not yet been successful in accomplishing. Sayed Hashimi also has a PackageWeb project from 2012 that might work as well. I am hoping his more recent work replaces the need for the extra steps involved in the PackageWeb solution.
Let me know if you decide on a solution - as I am definitely interested.
I am asking this especially because JBoss AS 7+ has changed completely (a full 180 degrees), forcing the application developer to think entirely in terms of JBoss Modules. That prevents the classpath-hell issues of earlier versions and encourages clean modular thinking. It also claims a quick startup time.
All that is fine, BUT my major concerns are these; please confirm if you feel the same:
JBoss insists on putting the jboss-deployment-structure.xml file inside WEB-INF. This would make the WAR file not portable at all, since it now contains app-server-specific configuration files. I am worried about interoperability.
I am still nervous about the enormous amount of XML configuration needed: create a module directory structure for each dependency you would like to add, create a module.xml for that dependency, create jboss-deployment-structure.xml entries for non-modules or manifest entries for libs inside WEB-INF/lib, and so on.
That would require considerable developer time and effort spent on becoming a configuration expert, or on hiring an expert or buying support - a significant cost in the long run for any team and company.
There is nothing about jboss-deployment-structure.xml that makes it non-portable. Other application servers will simply ignore the file if they don't use it.
You do not need to create a module if you want to use a dependency in your application. You would only do that if you want to use a common dependency among several deployments. For example a JDBC driver library.
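For that shared-dependency case, a module is just a directory under the server's modules/ tree plus a module.xml. A minimal sketch (the module name and jar file are made up; javax.api is the dependency a JDBC driver typically needs on AS 7):

<!-- modules/com/example/db/main/module.xml -->
<module xmlns="urn:jboss:module:1.1" name="com.example.db">
    <resources>
        <resource-root path="example-jdbc-driver.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
    </dependencies>
</module>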
There is no need to create a jboss-deployment-structure.xml or add manifest entries for libraries in WEB-INF/lib. The only time you would need a jboss-deployment-structure.xml is if you want to exclude server dependencies, like log4j, or add dependencies outside the scope of your deployment that are not automatically added. There are probably some other use cases, but those are the most common.
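To illustrate the exclusion case, a minimal sketch of the file (this tells AS 7 to use the log4j bundled in your WAR instead of the server's module):

<!-- WEB-INF/jboss-deployment-structure.xml -->
<jboss-deployment-structure>
    <deployment>
        <exclusions>
            <module name="org.apache.log4j"/>
        </exclusions>
    </deployment>
</jboss-deployment-structure>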