WSO2 GREG: Multi-environment Configuration - wso2-esb

I have set up two ESBs (uat & dev) with one shared registry, basically following:
http://wso2.com/library/tutorials/2010/04/sharing-registry-space-across-multiple-product-instances/
I'm having difficulty adding a UAT endpoint and finding it within esb-uat. If I use
<mount path="/_system/governance" overwrite="true">
<instanceId>RemoteGreg</instanceId>
<targetPath>/_system/governance/branches/uat</targetPath>
</mount>
Within the registry, should I expect to see a tree '_system/governance/branches/uat', or do I have to create it?
If I use the registry's 'Add Endpoint', I can enter Name/Version/Address/Environment. Is the environment 'branch/uat' or just 'uat'?
If I browse the endpoint I get
/_system/governance/trunk/endpoints/ep-<name-address data>
It's referencing trunk, not a branch, and 'uat' is nowhere to be seen. I'm obviously missing something pretty basic here. Is there an example of data entry for a shared registry?

Qlik Sense: how to specify path in Google Drive?

I have a Google drive account divided into some folders (say, Folder1, Folder2, etc.), with some subfolders in it.
I successfully managed to connect my Qlik Sense app to it.
I need to make it look for files only in a given subfolder.
At the moment, I read as follows ([...] is the location)
(URL IS [[...]connectorID=GoogleDriveConnector&table=ListSpreadsheets&appID=], qvx);
It works and reloads successfully, but I need it to filter the Spreadsheets properly. How could I get what I need?
To connect to Google Drive you in fact use the web connector. Once the web connector is installed, it can be started as a service or run manually from its folder.
Once it is installed (a recent version can be downloaded from https://qliksupport.force.com/apex/QS_Home_Page, but it seems you already have it, since Google Drive is part of it), it is much nicer to configure connections to online drives there.
You just go to http://localhost:5555/web and generate the ready-made script.
In my implementation I used the following options, step by step, to get the data I wanted:
1) CanAuthenticate to generate permanent token
2) ListSpreadsheets
3) ListWorksheets
4) GetWorksheet
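For reference, the generated load for step 3 (ListWorksheets) looks roughly like the following; this is only a sketch, with the connection name and empty appID mirroring the question, and the table label matching the one the loop below peeks from:
Google_ListWorksheets:
LOAD * FROM [$(vQwcConnectionName)]
(URL IS [http://localhost:5555/data?connectorID=GoogleDriveConnector&table=ListWorksheets&appID=], qvx);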
You can't just specify a path. It is possible, however, to retrieve the path from the QWC service tables. Use an algorithm like this:
Load one of the listing tables (ListFiles/ListWorksheets).
Iterate over every row with a 'for' loop:
FOR i=0 to (NoOfRows('Google_ListWorksheets')-1);
Let vWorksheetKey = Peek('worksheetKey', $(i), 'Google_ListWorksheets');
Let vTitle = left(Peek('title', $(i), 'Google_ListWorksheets'),3);
Inside the loop, use an 'if' statement to find the desired folder id/worksheet key by its name (stored in the vTitle variable) and load it; with the closing statements added, the body ends like this ('Fol' is only a placeholder for the first characters of the title you are after):
If vTitle = 'Fol' Then
load * FROM [$(vQwcConnectionName)]
(URL IS [http://localhost:5555/data?connectorID=GoogleDriveConnector&table=GetWorksheet&worksheetKey=$(vWorksheetKey)&appID=], qvx);
End If
NEXT
At the end you will get your files filtered by their location.

How to set Neo4J config keys in gremlin-scala?

When running a Neo4J database server standalone (on Ubuntu 14.04), configuration options are set for the global installation in /etc/neo4j/neo4j.conf or possibly $NEO4J_HOME/conf/neo4j.conf.
However, when instantiating a Neo4j database from Java or Scala using Apache's Neo4jGraph class (org.apache.tinkerpop.gremlin.neo4j.structure.Neo4jGraph), there is no global installation, and the constructor does not (as far as I can tell) look for any configuration files.
In particular, when running the test suite for my application, I end up with many simultaneous instances of Neo4jGraph, which ends up throwing a java.net.BindException: Address already in use because all of these instances are trying to communicate over a small range of ports for online backup, which I don't actually need. These channels are set with config options dbms.backup.address (default value: 127.0.0.1:6362-6372) and dbms.backup.enabled (default value: true).
My problem would be solved by setting dbms.backup.enabled to false, or expanding the port range.
Things that have not worked:
Creating /etc/neo4j/neo4j.conf containing the line dbms.backup.enabled=false.
Creating the same file in my project's src/main/resources directory.
Creating the same file in src/main/resources/neo4j.
Manually setting the configuration property inside the Scala code:
val db = new Neo4jGraph(dataDirectory)
db.configuration.addProperty("dbms.backup.enabled",false)
or
db.configuration.addProperty("neo4j.conf.dbms.backup.enabled",false)
or
db.configuration.addProperty("gremlin.neo4j.conf.dbms.backup.enabled",false)
How should I go about setting this property?
Neo4jGraph configuration through TinkerPop is accomplished by a pass-through of configuration keys. In TinkerPop 3.x, that means that all Neo4j keys prefixed with gremlin.neo4j.conf that are provided via a Configuration object to Neo4jGraph.open() or GraphFactory.open() will be passed down directly to the Neo4j instance. You can see examples of this in the TinkerPop documentation on high availability configuration.
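For illustration, a minimal Scala sketch of that pass-through (the directory path is a placeholder; as the answers below note, whether a given key actually takes effect this way depends on the Neo4j version):
import org.apache.commons.configuration.BaseConfiguration
import org.apache.tinkerpop.gremlin.neo4j.structure.Neo4jGraph

val conf = new BaseConfiguration()
// where the Neo4j database files live ("/path/to/db" is a placeholder)
conf.setProperty("gremlin.neo4j.directory", "/path/to/db")
// the "gremlin.neo4j.conf." prefix is stripped and the remainder handed to Neo4j
conf.setProperty("gremlin.neo4j.conf.dbms.backup.enabled", "false")
val graph = Neo4jGraph.open(conf)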
In TinkerPop 2.x, the same approach was taken however the key prefix was instead blueprints.neo4j.conf.* as discussed here.
Manipulating db.configuration after the database connection had already been opened was definitely futile.
stephen mallette's answer was on the right track, but this particular configuration doesn't appear to pass through in the way his linked example does. There is a naming mismatch between the configuration keys expected in neo4j.conf and those expected in org.neo4j.backup.OnlineBackupKernelExtension. Instead of dbms.backup.address and dbms.backup.enabled, that class looks for config keys online_backup_server and online_backup_enabled.
I was not able to get these keys passed down to the underlying Neo4jGraphAPI instance correctly. What I had to do, instead, was the following:
import org.apache.tinkerpop.gremlin.neo4j.structure.Neo4jGraph
import org.neo4j.tinkerpop.api.impl.Neo4jFactoryImpl
import scala.collection.JavaConverters._

val factory = new Neo4jFactoryImpl()
val config = Map(
  "online_backup_enabled" -> "true",
  "online_backup_server" -> "0.0.0.0:6350-6359"
).asJava
val db = Neo4jGraph.open(factory.newGraphDatabase(dataDirectory, config))
With this initialization, the instance correctly listened for backups on port 6350; changing "true" to "false" disabled backup listening.
Using Neo4j 3.0.0, the following disables port listening for me (Java code):
import org.apache.commons.configuration.BaseConfiguration;
import org.apache.tinkerpop.gremlin.neo4j.structure.Neo4jGraph;
BaseConfiguration conf = new BaseConfiguration();
conf.setProperty(Neo4jGraph.CONFIG_DIRECTORY, "/path/to/db");
conf.setProperty(Neo4jGraph.CONFIG_CONF + "." + "dbms.backup.enabled", "false");
Neo4jGraph graph = Neo4jGraph.open(conf);

Programmatically modify web.config for a self-hosted service (not via changes to applicationHost.config)

I'm stuck on something I should probably move on from but it's driving me nuts...
I can programmatically update SSL certificate info for a self-hosted WCF service using:
Dim config As Microsoft.Web.Administration.Configuration = _
serverManager.GetApplicationHostConfiguration
site.Bindings.Clear()
Dim binding = site.Bindings.Add(ipport, cert.GetCertHash, store.Name)
serverManager.CommitChanges()
and can also change sections of my local web.config using a known file path starting with:
Dim cfg As System.Configuration.Configuration = _
System.Web.Configuration.WebConfigurationManager.OpenMappedWebConfiguration(ConfigFileMap, target)
but if I try to drill down to system.webServer/security/access using:
Dim accessSection = cfg.GetSection("system.webServer/security")
I get Nothing/null, and further digging helpfully produces the status "System.Configuration.IgnoreSection" for that section, which apparently indicates that System.Configuration doesn't want to play nice with that specific piece - even though it's not denied access in applicationHost.config, as far as I can tell.
On the other hand, if I try to use Microsoft.Web.Administration I can only figure out how to make the change to applicationHost.config, not to the local web.config.
The only thing that seems willing to put the client certificate requirement (the sslFlags setting) in the local web.config is IIS Manager (which also doesn't show the setting correctly if it is located in applicationHost.config).
Obviously there are all sorts of ways to do this, but I can't believe there isn't a simple .NET way (other than editing the XML directly). Does anyone know what the heck I am doing wrong?
Doh!!! Apparently I need to learn to read the manual (although MS doesn't make it easy).
You can simply do:
Using serverManager As New ServerManager
    Dim config As Microsoft.Web.Administration.Configuration = _
        serverManager.GetWebConfiguration("site name", "/application")
    Dim accessSection As Microsoft.Web.Administration.ConfigurationSection = _
        config.GetSection("system.webServer/security/access")
    accessSection("sslFlags") = "Ssl,SslRequireCert"
    serverManager.CommitChanges()
End Using
Retroactively obvious link: Relevant StackOverflow Question

Adding a TFS server group to access levels via command line

I am creating a group of users within TFS 2013 and I want to add them to a non-default access level (e.g. the Full access group), but I noticed I am only able to do this through the web interface by adding a TFS group under that level. I am wondering if there is a way to do this via the developer tool (command line), as everything I am doing is being done in a batch script.
Any input would be appreciated. Thanks!
Create 3 TFS server groups and add these groups to the different access levels (e.g. TFS_ACCESS_LEVEL_(NONE|STANDARD|FULL)). Then use the TFSSecurity command-line tool to add groups to these existing, mapped groups (tfssecurity /g+ TFS_ACCESS_LEVEL_NONE GroupYouWantToHaveThisAccessLevel). There is no other way to directly add people to the access levels, except possibly through the Object Model using C#.
For the record, tfssecurity may require the collection URI, which can be obtained via the API. This is easy to do in PowerShell; here is how to create a TFS group:
[psobject] $tfs = get-tfs -serverName $collection
$projectUri = ($tfs.CSS.ListAllProjects() | where { $_.Name -eq $project }).Uri
& $TFSSecurity /gc $projectUri $groupName $groupDescription /collection:$collection
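Building on that, adding the new group to one of the mapped access-level groups from the first answer could then look roughly like this (a sketch only; the identity syntax may need adjusting for your server):
# Sketch: add $groupName as a member of the access-level group created earlier
& $TFSSecurity /g+ "TFS_ACCESS_LEVEL_FULL" $groupName /collection:$collection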
Full script at TfsSecurity wrapper.

Using the TFS API, filetypes with extensions like .svnExe get ignored

I'm working on a tool which migrates from SVN to TFS using the TFS API.
workspace.CheckIn(
pendingChanges,
currentUser.TfsUser,
set.LogMessage + " on " + String.Format("{0:d/M/yyyy HH:mm:ss}", set.TimeStamp) + " by " + currentUser.SvnUser,
(CheckinNote)null,
(WorkItemCheckinInfo[])null,
(PolicyOverrideInfo)null
);
This is the way I check my revision in, but sometimes it ignores files like .svnExe or other "unknown" file types.
Is there a way to check ALL filetypes in TFS?
There are two possibilities that I can think of:
Possibility 1: Something is causing the PendAdd() to fail.
For example, if the path already exists in Version Control, you have to use a PendEdit() instead.
To diagnose this possibility, you should subscribe to the VersionControlServer.NonFatalError event.
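A minimal sketch of that subscription (assuming workspace is the Workspace used for the check-in above; the handler just logs whatever the server reports):
// Requires Microsoft.TeamFoundation.VersionControl.Client
workspace.VersionControlServer.NonFatalError += (sender, e) =>
{
    // e.Failure holds server-side rejections, e.Exception holds client-side errors
    if (e.Failure != null)
        Console.WriteLine("Non-fatal failure: " + e.Failure.Message);
    if (e.Exception != null)
        Console.WriteLine("Non-fatal exception: " + e.Exception.Message);
};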
Possibility 2: You could have a corrupt workspace cache
You can refresh the cache by calling Workstation.Current.EnsureUpdateWorkspaceInfoCache() or by following the steps in this answer (run tf workspaces /collection:http://yourserver:8080/tfs/DefaultCollection, or delete the directories manually).
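And a rough sketch of the cache refresh (the two-argument overload reflects my understanding of the client API; workspace as above):
// Refresh the local workspace info cache for the authenticated user
var vcs = workspace.VersionControlServer;
Workstation.Current.EnsureUpdateWorkspaceInfoCache(vcs, vcs.AuthorizedUser);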