Pushing metadata automatically in Git - GeoNetwork

I want to make a connection between GeoNode and GeoNetwork.
I want to harvest the metadata from GeoNetwork.
How can I connect GeoNode to GeoNetwork?
Please guide me.

Related

How to deploy a Camunda modeled diagram into Camunda Tomcat

I am trying to set up a BPMN workflow with Camunda. For this, I already made a diagram using the Camunda Modeler. Now I want to open this BPMN diagram in Camunda. Camunda's Tomcat is installed and running, but I can't manage to upload/find the diagram in Camunda's Tomcat. I am currently trying this on my local machine.
Does anyone know how to get a BPMN diagram into Camunda's Tomcat?
In addition to the deployment options described by @MuffinMICHI, you can also deploy your diagram via the REST API. You just make a POST request to /engine-rest/deployment/create
You set Content-Type to:
multipart/form-data
You set these parameters:
deployment-name: <SOME NAME>
deployment-source: <SOME NAME>
data: <UPLOAD THE DIAGRAM HERE>
diagram (optional): <UPLOAD IMAGE FOR DIAGRAM>
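For illustration, here is a minimal, hedged sketch of such a request using plain java.net.http from the JDK. The deployment name, the process.bpmn file name, and the http://localhost:8080/engine-rest endpoint are assumptions to adjust to your own setup:

import java.io.ByteArrayOutputStream;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class DeployToCamunda {
    public static void main(String[] args) throws Exception {
        String boundary = "----camunda-deploy-" + System.currentTimeMillis();
        Path bpmnFile = Path.of("process.bpmn"); // hypothetical file name
        ByteArrayOutputStream body = new ByteArrayOutputStream();

        // Plain text parts: deployment-name and deployment-source.
        writeTextPart(body, boundary, "deployment-name", "my-deployment");
        writeTextPart(body, boundary, "deployment-source", "rest-client");
        // File part: the BPMN diagram goes into the "data" field.
        writeFilePart(body, boundary, "data", bpmnFile);
        body.write(("--" + boundary + "--\r\n").getBytes(StandardCharsets.UTF_8));

        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:8080/engine-rest/deployment/create"))
            .header("Content-Type", "multipart/form-data; boundary=" + boundary)
            .POST(HttpRequest.BodyPublishers.ofByteArray(body.toByteArray()))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }

    static void writeTextPart(ByteArrayOutputStream out, String boundary,
                              String name, String value) throws Exception {
        out.write(("--" + boundary + "\r\nContent-Disposition: form-data; name=\""
                + name + "\"\r\n\r\n" + value + "\r\n").getBytes(StandardCharsets.UTF_8));
    }

    static void writeFilePart(ByteArrayOutputStream out, String boundary,
                              String name, Path file) throws Exception {
        out.write(("--" + boundary + "\r\nContent-Disposition: form-data; name=\"" + name
                + "\"; filename=\"" + file.getFileName() + "\"\r\n"
                + "Content-Type: application/octet-stream\r\n\r\n").getBytes(StandardCharsets.UTF_8));
        out.write(Files.readAllBytes(file));
        out.write("\r\n".getBytes(StandardCharsets.UTF_8));
    }
}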
There are two ways to upload your diagram to your BPMN engine.
In the Camunda Modeler, there is a little upwards-pointing arrow in the menu bar. There you can specify where your engine is running and upload the diagram directly from the modeler.
https://docs.camunda.org/get-started/quick-start/service-task/
If you also have some JavaDelegate classes you want to deploy with your diagram, you can pack everything into a WAR file and put it in the webapps folder of your Tomcat, which will then deploy it automatically (see the process-application sketch after this answer).
https://docs.camunda.org/get-started/java-process-app/service-task/
The provided links guide you to the official Camunda documentation where all these things are explained in detail.
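As a rough sketch of that WAR-based option (assuming a Maven WAR project built against the shared Camunda engine, with an empty META-INF/processes.xml on the classpath; the class and application names below are made up), the WAR only needs a process application class:

import org.camunda.bpm.application.ProcessApplication;
import org.camunda.bpm.application.impl.ServletProcessApplication;

// Marks the WAR as a process application. Together with META-INF/processes.xml,
// the BPMN resources packaged in the WAR are deployed to the shared engine when
// Tomcat starts the web application.
@ProcessApplication("my-process-application")
public class MyProcessApplication extends ServletProcessApplication {
    // Intentionally empty: deployment is driven by the servlet container lifecycle.
}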
a) You can deploy directly from the modeler to the server.
https://docs.camunda.org/get-started/quick-start/deploy/
In the latest release the feature has improved further:
https://blog.camunda.com/post/2019/10/camunda-modeler-3.4.0-released/
On a local setup, use the REST endpoint http://localhost:8080/engine-rest if using one of the prepackaged distributions, or http://localhost:8080/rest if using Spring Boot.
b) Process and decision models (BPMN, DMN) can be auto-deployed. For instance, placing the files in the src/main/resources folder (on a default Spring Boot setup) will auto-deploy them during startup.
c) There are other auto-deploy configuration options: https://docs.camunda.org/manual/latest/user-guide/spring-framework-integration/deployment/
d) You can use the REST API to deploy, for instance with Postman.
https://docs.camunda.org/manual/latest/reference/rest/deployment/post-deployment/
Examples:
https://github.com/rob2universe/camunda-rest-postman
https://forum.camunda.org/t/process-deployment-to-rest-api-through-postman/10630
Deploy a Camunda process:
https://docs.camunda.org/get-started/quick-start/deploy/
You can also use the play button to deploy if you are deploying the process for the first time.
camunda-spring-boot-starter is configured to use the SpringProcessEngineConfiguration auto deployment feature by default.
https://docs.camunda.org/manual/7.9/user-guide/spring-boot-integration/process-applications/
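As a minimal sketch of that auto-deployment setup (assuming the camunda-bpm-spring-boot-starter dependency is on the classpath; class and file names are illustrative), a plain Spring Boot application class is enough, with the BPMN files placed under src/main/resources:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class WorkflowApplication {
    public static void main(String[] args) {
        // On startup, the SpringProcessEngineConfiguration auto-deployment picks up
        // BPMN/DMN files found on the classpath, e.g. src/main/resources/process.bpmn.
        SpringApplication.run(WorkflowApplication.class, args);
    }
}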

How to connect to Documentum Content Server or repository using OpenCMIS

I tried to connect to a Documentum Content Server repository using the CMIS API, but I am not able to connect.
I have Documentum Content Server and the Webtop application. Now I just want to connect to the repository, and I need a repository session.
How do I connect to a Documentum repository using the CMIS API?
I tried the following code, but it is not working because it is a snippet I used for connecting to an Alfresco repository, and I just modified it with the Documentum server IP.
So any sample code will be really helpful. At least if I can get a repository session object, it would be great.
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.chemistry.opencmis.client.api.Repository;
import org.apache.chemistry.opencmis.client.api.Session;
import org.apache.chemistry.opencmis.client.api.SessionFactory;
import org.apache.chemistry.opencmis.client.bindings.CmisBindingFactory;
import org.apache.chemistry.opencmis.client.runtime.SessionFactoryImpl;
import org.apache.chemistry.opencmis.commons.SessionParameter;
import org.apache.chemistry.opencmis.commons.enums.BindingType;

SessionFactory factory = SessionFactoryImpl.newInstance();
Map<String, String> parameter = new HashMap<String, String>();
// user credentials
parameter.put(SessionParameter.USER, "user");
parameter.put(SessionParameter.PASSWORD, "pass");
// AtomPub binding URL (taken from the Alfresco snippet, with the Documentum server IP)
parameter.put(SessionParameter.ATOMPUB_URL, "http://localhost:8080//cmis/atom");
parameter.put(SessionParameter.BINDING_TYPE, BindingType.ATOMPUB.value());
// NTLM authentication provider
parameter.put(SessionParameter.AUTHENTICATION_PROVIDER_CLASS,
        CmisBindingFactory.NTLM_AUTHENTICATION_PROVIDER);
List<Repository> repositories = factory.getRepositories(parameter);
Session sourceSession = repositories.get(0).createSession();
With the above code I am not able to get a repository session, so please let me know if I am doing anything wrong, or please share any other sample code if you have one.
I used the above code to get an Alfresco repository session, but I am not familiar with Documentum, so I tried modifying the same Alfresco CMIS code.
First of all, avoid NTLM! Even if you get it working at some point, you will run into strange issues later.
This document is a bit outdated, but maybe it contains a few clues for you: http://www.jouvinio.net/wiki/images/a/a4/Documentum_cmis_6.7_deployment.pdf
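Building on that advice, here is a hedged sketch that drops the NTLM provider and lets OpenCMIS use its standard (basic) authentication. The AtomPub URL follows the emc-cmis pattern from the 6.7-era deployment guide and is an assumption, so check it against your own installation:

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.apache.chemistry.opencmis.client.api.Repository;
import org.apache.chemistry.opencmis.client.api.Session;
import org.apache.chemistry.opencmis.client.api.SessionFactory;
import org.apache.chemistry.opencmis.client.runtime.SessionFactoryImpl;
import org.apache.chemistry.opencmis.commons.SessionParameter;
import org.apache.chemistry.opencmis.commons.enums.BindingType;

public class DocumentumCmisConnect {
    public static void main(String[] args) {
        SessionFactory factory = SessionFactoryImpl.newInstance();
        Map<String, String> parameters = new HashMap<>();
        parameters.put(SessionParameter.USER, "user");
        parameters.put(SessionParameter.PASSWORD, "pass");
        // Assumed URL pattern for the Documentum CMIS web application -- verify
        // against your deployment (see the linked deployment guide).
        parameters.put(SessionParameter.ATOMPUB_URL,
                "http://CONTENT-SERVER-HOST:8080/emc-cmis/resources");
        parameters.put(SessionParameter.BINDING_TYPE, BindingType.ATOMPUB.value());
        // No AUTHENTICATION_PROVIDER_CLASS: OpenCMIS falls back to its standard
        // basic-authentication provider, so NTLM is not involved.

        List<Repository> repositories = factory.getRepositories(parameters);
        Session session = repositories.get(0).createSession();
        System.out.println("Connected to: " + session.getRepositoryInfo().getName());
    }
}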

How to deploy an ATG project in WebLogic?

I created a simple project using ATG 10.2. I want to know how to deploy it in WebLogic. Please provide a detailed procedure, with screenshots if possible.
Providing a 'detailed' procedure is beyond the scope of what Stack Overflow is trying to provide. That said, if you have an understanding of the WebLogic Management Console you should be able to follow these steps to set up your initial deployment:
Create a Server
1.1 Specify a server name (eg. commerce) and the port number this server will run on (eg. 8180). Select it as a 'Stand-alone server'.
1.2 Once created, go to Configuration > Server Start for the newly created server and modify the 'Arguments' block to include the following settings (assuming you are running Windows; for Unix, update the paths accordingly)
-Datg.dynamo.data-dir=c:\ATG-Data -Datg.dynamo.server.name=commerce -d64 -XX:ParallelGCThreads=8 -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Xms1152m -Xmx2048m -XX:NewSize=128m -XX:MaxNewSize=256m -XX:PermSize=128m -XX:MaxPermSize=256m
1.3 Save your Server
Create DataSources
2.1 In the Console click on 'Services > Data Sources'
2.2 Create 'New' datasources for each of your connections. As a minimum you will need connections for ATGSwitchingDS_A, ATGSwitchingDS_B (assuming you are using switching datasources) and ATGProductionDS. These names should match the JNDI names in your property files (see the JNDI lookup sketch at the end of this answer). Remember to specify the 'commerce' server as the target for each of the datasources.
Create Deployment
3.1 Assuming you've already built your EAR (e.g. ATGProduction.ear) and it is available in c:\deployments, create a deployment in the WebLogic console and specify 'commerce' as the target. Once done, you also need to 'start serving requests' on the deployment.
Start Server
You should now be able to see your server running on port 8180 with the log files being written to c:\ATG-Data\servers\commerce\logs.
If after this things aren't running, post specific questions about your issues and someone here might be able to help you.
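As a quick sanity check for step 2.2, here is a hedged sketch that resolves one of the datasources by the same JNDI name configured in the console. It is plain JNDI/JDBC rather than ATG-specific code; the server URL and the ATGProductionDS name come from the steps above:

import java.sql.Connection;
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class DataSourceLookupCheck {
    public static void main(String[] args) throws Exception {
        // Remote JNDI lookup against the 'commerce' managed server; requires a
        // WebLogic client jar (e.g. wlthint3client.jar) on the classpath.
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://localhost:8180");
        InitialContext ctx = new InitialContext(env);

        // Must match the JNDI name given to the datasource in the console (step 2.2).
        DataSource ds = (DataSource) ctx.lookup("ATGProductionDS");
        try (Connection connection = ds.getConnection()) {
            System.out.println("ATGProductionDS reachable: " + connection.isValid(5));
        }
    }
}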

Shared config file for all MobileFirst server adapters

Can I get the config for all adapters in one place? For example, I need to store connection strings and HTTP server addresses that are needed across my MobileFirst server.
MobileFirst version 8.0.
Thanks in advance!
In MobileFirst Foundation 8.0 you have the following options:
If using JavaScript adapters:
Edit the connectivity settings from the MobileFirst Console,
Or create a config file and use Maven commands (or the MobileFirst CLI in an upcoming CLI update), or other tools, to push the file to each adapter that requires that same set of connection settings.
Using this method there is no downtime to the server.
See the "Pull and Push Configurations" topic here: https://mobilefirstplatform.ibmcloud.com/tutorials/en/foundation/8.0/adapters/javascript-adapters/
Customized adapter properties can be shared using the adapter configuration file found in the Configuration files tab.
To do so, use the pull and push commands described below. For the properties to be shared, you need to change the default values given to the properties.
Replace the DmfpfConfigFile placeholder with the actual value, for example: config.json. Then, run the command from the root folder of the adapter Maven project:
To pull the configurations file - mvn adapter:configpull -DmfpfConfigFile=<path to a file that will store the configuration>.
To push the configurations file - mvn adapter:configpush -DmfpfConfigFile=<path to the file that stores the configuration>.
If using Java adapters:
You can add JNDI properties to the server.xml of your application server, and using the ConfigurationAPI (getServerJNDIProperty) you can read those properties in each of your adapters. However, note that using server.xml will incur downtime whenever you want to update your list of connection properties.
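As a hedged sketch of that Java-adapter approach (the JNDI property name db.connection.url and the resource path are made-up examples), the adapter resource gets the ConfigurationAPI injected and reads the property defined in server.xml:

import com.ibm.mfp.adapter.api.ConfigurationAPI;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.MediaType;

@Path("/")
public class SharedConfigResource {

    @Context
    ConfigurationAPI configApi; // injected by the MobileFirst adapter framework

    @GET
    @Path("/connection-url")
    @Produces(MediaType.TEXT_PLAIN)
    public String getConnectionUrl() {
        // Reads a JNDI property defined in the application server's server.xml,
        // e.g. a <jndiEntry jndiName="db.connection.url" value="jdbc:..."/> on Liberty.
        return configApi.getServerJNDIProperty("db.connection.url");
    }
}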

I am trying OpenShift Origin; I cannot create an application

I am trying OpenShift Origin on a RHEL Atomic Host. I spun up the Origin master as a container following this guide: https://docs.openshift.org/latest/getting_started/administrators.html
After attaching a shell to the Master Container, I cannot deploy an app.
# oc new-app openshift/deployment-example
error: can't look up Docker image "openshift/deployment-example": Internal error occurred: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection error: no match for "openshift/deployment-example"
The 'oc new-app' command will match arguments to the following types:
1. Images tagged into image streams in the current project or the 'openshift' project
- if you don't specify a tag, we'll add ':latest'
2. Images in the Docker Hub, on remote registries, or on the local Docker engine
3. Templates in the current project or the 'openshift' project
4. Git repository URLs or local paths that point to Git repositories
--allow-missing-images can be used to point to an image that does not exist yet.
See 'oc new-app -h' for examples.
The host needs a proxy to access the Internet. I have configured the proxy in /etc/sysconfig/docker, which is how I was able to pull the Origin image on the same host.
I have tried setting the proxy for the master and node, with no luck:
https://docs.openshift.org/latest/install_config/http_proxies.html
It is possible that your proxy is terminating the connection. You can test by creating an internal registry, pushing the image to it, and then using
"oc new-app your.internal.registry/openshift/deployment-example"