I am using the Apache Jackrabbit WebDAV library for an SVN check-in operation. I am using MkActivityMethod to start the transaction, but I don't know how to add a commit message. Following is the code:
RandomStringGenerator rsg = new RandomStringGenerator(32);
String random = rsg.nextString();
String url = getRepoAddress() + "!svn/act/" + random;
MkActivityMethod activityMethod = null;
try
{
    activityMethod = new MkActivityMethod(url);
    client.executeMethod(activityMethod);
}
catch(Exception e)
{
    e.printStackTrace();
}
This code executes successfully, but I don't understand how to write a log message with it.
Any help would be appreciated.
First of all, I'd suggest that you not reinvent the wheel that's already been done twice now and instead use a library that knows Subversion's DAV-based protocol. Note that while Subversion is mostly WebDAV and DeltaV compatible, it does have non-standard extensions.
To that end I'd point you to JavaHL or SVNKit. JavaHL comes with Subversion and uses JNI to access the Subversion libraries. SVNKit is an independent Java-only implementation and includes a couple of different interfaces, including one that is JavaHL compatible. If the use of native libraries by JavaHL doesn't present a problem for you, I'd recommend it, since you'll have the benefit of using the same libraries as nearly every Subversion client.
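For example, with SVNKit's low-level repository API the commit message is simply passed in when the commit editor is opened. Roughly like this (an untested sketch; repoUrl, the credentials, and the edits themselves are placeholders):

import org.tmatesoft.svn.core.SVNCommitInfo;
import org.tmatesoft.svn.core.SVNURL;
import org.tmatesoft.svn.core.internal.io.dav.DAVRepositoryFactory;
import org.tmatesoft.svn.core.io.ISVNEditor;
import org.tmatesoft.svn.core.io.SVNRepository;
import org.tmatesoft.svn.core.io.SVNRepositoryFactory;
import org.tmatesoft.svn.core.wc.SVNWCUtil;

DAVRepositoryFactory.setup(); // enable http:// and https:// repository access
SVNRepository repository = SVNRepositoryFactory.create(SVNURL.parseURIEncoded(repoUrl));
repository.setAuthenticationManager(SVNWCUtil.createDefaultAuthenticationManager("user", "password"));

// The log message is supplied up front when the commit editor is opened
ISVNEditor editor = repository.getCommitEditor("my commit message", null);
editor.openRoot(-1);
// ... add or change files/directories through the editor here ...
editor.closeDir();
SVNCommitInfo info = editor.closeEdit();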
If, however, your goal is to understand how Subversion implements the protocol on top of WebDAV and DeltaV, then perhaps you just want to use a generic WebDAV and DeltaV client library to help. I'd recommend that you refer to these documents that describe how WebDAV and DeltaV are implemented within Subversion.
One thing you might want to understand is that, as of Subversion 1.7, we support what we refer to as HTTPv2. HTTPv2 varies somewhat from the DeltaV standard; in particular, instead of using MKACTIVITY to start a transaction on the server, we use a POST whose body has a syntax something like this:
(create-txn)
or
( create-txn-with-props (PROPNAME PROPVAL [PROPNAME PROPVAL ...]) )
The older style which you must use with MKACTIVITY (and can use with the POST if you use create-txn instead of create-txn-with-props) is to use a PROPPATCH on the transaction or the working baseline URL.
The working baseline URL is used with MKACTIVITY and the transaction URL is used with the POST.
When using MKACTIVITY you have to use a PROPFIND on the root URL to get the version-controlled-configuration. Then do a CHECKOUT against the URL you received in response to that PROPFIND, providing the URL you used with MKACTIVITY as the activity-set href. You'll get the working baseline URL back in the Location header of the CHECKOUT response, which you can then use to issue a PROPPATCH to apply the revision properties (svn:log, the commit message, among them).
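To tie that back to the Jackrabbit client code in the question: once you have the working baseline URL from the CHECKOUT's Location header, the commit message is set by a PROPPATCH of the log property in Subversion's svn DAV namespace. Something along these lines (an untested sketch; workingBaselineUrl and client are assumed to come from the surrounding code):

import org.apache.jackrabbit.webdav.client.methods.PropPatchMethod;
import org.apache.jackrabbit.webdav.property.DavPropertyNameSet;
import org.apache.jackrabbit.webdav.property.DavPropertySet;
import org.apache.jackrabbit.webdav.property.DefaultDavProperty;
import org.apache.jackrabbit.webdav.xml.Namespace;

// Namespace Subversion uses to expose svn:* properties over DAV
Namespace svnNs = Namespace.getNamespace("S", "http://subversion.tigris.org/xmlns/svn/");

DavPropertySet setProps = new DavPropertySet();
setProps.add(new DefaultDavProperty<String>("log", "my commit message", svnNs));

// PROPPATCH the working baseline URL returned in the CHECKOUT's Location header
PropPatchMethod propPatch = new PropPatchMethod(workingBaselineUrl, setProps, new DavPropertyNameSet());
client.executeMethod(propPatch);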
When using POST, you get the transaction stub from the headers of the OPTIONS response and the transaction name from the SVN-Txn-Name header in the response to the POST, and then execute a PROPPATCH against the $transaction_stub/$transaction_name URL.
Probably the best way to figure all this out is to set up a Subversion server and do some commits while running Subversion through a debugging proxy server such as Charles. You can force the traffic through the proxy on the svn command line with these options: --config-option servers:global:http-proxy-port=8888 --config-option servers:global:http-proxy-host=127.0.0.1. If you want to see the old protocol you can include SVNAdvertiseV2Protocol off in your httpd configuration.
In order to support the broadest range of Subversion servers you need to implement the HTTPv1 protocol, which has more round trips and is more difficult to implement. If you only want to implement HTTPv2, you'll be limited to supporting Subversion servers 1.7 and newer. In order to use HTTPv2 with maximum compatibility you'll have to detect its presence from the OPTIONS response.
As you can see it gets rather complicated so it's really not worth trying to write your own client if all you want to do is implement some basic functionality.
So you are trying to do an SVN commit using WebDAV via the SVNAutoversioning on directive?
http://svnbook.red-bean.com/en/1.7/svn.webdav.autoversioning.html
AFAIK, the spec does not allow you to provide a commit message and the server will always create one for you. Perhaps you want to look at the SVNKit library if you are trying to create SVN transactions via Java.
http://svnkit.com
Appendix A of The Definitive Guide to Jython describes downloading SetupTools for use with Jython.
https://jython.readthedocs.io/en/latest/appendixA/
This indicates to me that it should be possible to download and use SetupTools from within a Jython automation script in Maximo (v7.6 in my case). The book points us to the following URL, from which we can copy a Jython script that will do this:
http://peak.telecommunity.com/dist/ez_setup.py
I add one line to the above script to call the function "use_setuptools":
use_setuptools()
Then I create a push button in a Maximo application and associate the aforementioned script with it. When I press the button I get the following error:
System Message BMXAA7837E - An error occured that prevented the
EZ_SETUP script for the EZ_SETUP launch point from running.
urllib2.HTTPError: HTTP Error 403: SSL is required in at line
number 280
The stack trace points to the following line in the function "download_setuptools" which is called by "use_setuptools":
src = urllib2.urlopen(url)
This appears to be because the url requested, in my case:
http://pypi.python.org/packages/2.5/s/setuptools/setuptools-0.6c11-py2.5.egg
...is redirected a few times before arriving at the following url:
https://files.pythonhosted.org/packages/98/d3/ed3bc7e3f4b143092862dab99d984261ac4be6a40d4fb1e66d2a10e9ea99/setuptools-0.6c11-py2.5.egg
Note that this final URL uses HTTPS, not HTTP. The following indicates why this may be so:
https://sourceforge.net/p/pypi/support-requests/300/
The jython.jar included with Maximo does not include the ssl module, so we could do one of the following:
Download the ssl module manually and copy it to the correct location on the server.
Download the appropriate egg file manually over HTTPS and copy it to the correct location on the server.
Bypass the problem by creating a mirror for the file we're looking for that is accessible over HTTP and use that url in the code.
Whilst these are feasible, I'd prefer to modify the code to ignore the SSL certificate if possible. However, all the workarounds on Stack Overflow and elsewhere seem to require that you're able to "import ssl" in order to bypass it, which rather defeats the purpose.
Ideally I'm looking for a solution that modifies the code from the URL provided above to get it to work with Maximo/Jython 2.5.2, and that doesn't require downloading and adding new modules or packages and all that this entails with Maximo. Bypassing or temporarily disabling SSL is fine, as the code checks the hash of the downloaded .egg file. This would be my preferred solution if possible.
In my experience, automation scripting works best if you can stay "as Java as possible" and "as Maximo as possible". So, I would use the LIB_HTTPCLIENT script from the Scripting 76 Features document (the first example code, whose name is given by inference in the second bit of code) to try to download the SetupTools.
In case that document moves again, here is the LIB_HTTPCLIENT script. Note that the url variable is expected to be passed to this library script by the calling script.
from psdi.iface.router import HTTPHandler
from java.util import HashMap
from java.lang import String

handler = HTTPHandler()
map = HashMap()
map.put("URL", url)
map.put("HTTPMETHOD", "GET")
responseBytes = handler.invoke(map, None)
response = String(responseBytes, "utf-8")
I'm trying to create a serverless function for Kong for authentication purposes. I'm required to use a client certificate to authenticate with the remote service that we have to use. I can't seem to get this working and there appears to be no clear documentation on how to do this. I've tried pintsized/lua-resty-http, ngx.socket.tcp(), and luacurl (failed to build) without success. I'm using the newest version of Kong in an Alpine Linux container in case that matters.
What is the best way to do this? Right now I'm considering simply calling curl from within Lua as I know that works, but I was hoping for a better solution that I can do with just Lua/OpenResty.
Thanks.
UPDATE: I just wanted to add, just in case it helps, that I'm already building a new image based on the official Kong one as I had to modify the nginx configuration templates, so installing new software into the container is not an issue.
All,
Apologies for the ugly code, but it looks like I found an answer that works:
require("socket")
local currUrl= "https://some.url/"
local https = require("ssl.https")
local ltn12 = require("ltn12")
local chunks = {}
local body, code, headers, status = https.request{
mode = "client",
url = currUrl,
protocol = "tlsv1_2",
certificate = "/certs/bundle.crt",
key = "/certs/bundle.key",
verify = "none",
sink = ltn12.sink.table(chunks),
}
If someone has a better answer, I'd appreciate it, but it's hard to complain about this one. The main issue is that while this works for a GET request, I'll be wanting to do POSTs to a service in the future and I have no idea how to do that using similar code. I'd like one library/API that can do any type of REST request.
This blog got me on the right track: http://notebook.kulchenko.com/programming/https-ssl-calls-with-lua-and-luasec
Worklight 5.0.6.1
We have a specific requirement from our client about using a PCI appliance from Intel (http://info.intel.com/rs/intel/images/Intel_Expressway_Tokenization_Broker.pdf) to avoid a PCI audit for the application and server.
Therefore, the adapter calls that have something to do with payment data would need to go through this hardware appliance before hitting the Worklight server. All other adapter calls should go to the Worklight server directly (to not overload the appliance).
The idea is to have two different URLs but the same Worklight server in the background. It is assumed that the calls through the appliance will be transparent to the Worklight server, so Worklight functionality should not be impacted.
My questions around this would be:
Is there a Worklight best practice for having two different URLs for the same Worklight server and alternating those URLs from the client for adapter invocations (only; not direct update or anything else, since we assume this is executed natively)?
Is it possible to dynamically overwrite the Worklight server URL that is used for an adapter invocation through JavaScript code in the client, e.g. by overwriting a specific JS function that gets/returns the Worklight URL from somewhere before the WL.Client AJAX adapter invocation?
We are also looking into having a load balancer switch the route based on a regex of the adapter name that is being invoked, but it is not clear right now whether that is possible and what the performance impact would be.
Though possible, this is not something supported by WL. You will not be able to get help from support in case something goes wrong (and it will). You have to keep in mind that all server cookies (e.g. the session id) are per domain. Therefore, when you dynamically change the server URL you will lose them, and the WL server will treat your request as a new session, unrelated to the old (existing) one. This is not something specific to WL; this is how HTTP works.
WL keeps server URLs in two global properties - WL.AppProp.WORKLIGHT_ROOT_URL and WL.AppProp.APP_SERVICES_URL. You can override them, thus changing the server URLs.
The first one is used for all requests triggered by the developer (init, connect, login etc.). The second one is used for miscellaneous internal functionality (e.g. encrypted cache).
Once again - this is a hack, definitely not a solution. Use with caution, if at all. :)
How about this: we define our own function that updates the relevant static properties:
function changeServerUrl(serverURL) {
    WL.StaticAppProps.APP_SERVICES_URL = serverURL + WL.StaticAppProps.POSTFIX_APP_SERVICES_URL;
    WL.StaticAppProps.WORKLIGHT_ROOT_URL = serverURL + WL.StaticAppProps.POSTFIX_WORKLIGHT_ROOT_URL;
    WL.StaticAppProps.WORKLIGHT_BASE_URL = serverURL;
}
and call it:
changeServerUrl("http://" + yourServerIP + ":" + PORT);
If you dig into the worklight.js file there is a function "setWLUrl(url)" that can be used to change the server URL.
Call it like this and it's done:
setWLUrl("http://" + yourServerIP + ":" + PORT);
It's kind of a hack, but I think it should not cause any issues since it's a function within their API.
Good Luck
I am using a raw Java HTTP client to connect to the Shopify API (specifically, using Play Framework with the non-default sync driver, which is actually the JDK's default driver).
My application usually manages to connect successfully and convert the temporary access token into a permanent one by calling the /admin/oauth/access_token endpoint.
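For context, the exchange itself is just a form-encoded POST of the app credentials and the temporary code to that endpoint. Stripped down, the call looks roughly like this (a simplified sketch using the plain JDK client; shop, clientId, clientSecret and code are placeholders from our configuration and the OAuth callback):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

// Exchange the temporary OAuth code for a permanent access token (sketch)
URL endpoint = new URL("https://" + shop + ".myshopify.com/admin/oauth/access_token");
String form = "client_id=" + URLEncoder.encode(clientId, "UTF-8")
        + "&client_secret=" + URLEncoder.encode(clientSecret, "UTF-8")
        + "&code=" + URLEncoder.encode(code, "UTF-8");

HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
conn.setRequestMethod("POST");
conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
conn.setInstanceFollowRedirects(false); // we never want the client to follow a redirect
conn.setDoOutput(true);
OutputStream out = conn.getOutputStream();
out.write(form.getBytes("UTF-8"));
out.close();

int status = conn.getResponseCode(); // 400 with {"error":"invalid_request"} is the failure described below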
However, sometimes I get this error result from the API:
Generic Error(400)
{"error":"invalid_request"}
I haven't been able to reproduce the issue with my test stores - I've tried installing on a fresh store and reinstalling existing stores after uninstalling. I'm not sure why this call sometimes fails or how to debug it. The API call still succeeds for some stores using our application.
Some things that I am doing:
Even if the URL of the store is on a custom domain, I'm always using the https://foo.myshopify.com/admin/oauth/access_token URL and not the URL of the custom domain, to prevent a redirect.
I am always using an https URL and never an http one, again to prevent a redirect (we noticed a few issues with redirects with the Java HTTP client, so we aim to have zero redirects).
A thread I found about this error suggests possible problems with our SSL certificates. However, I don't think this is my problem because some requests work for us, and the result of running openssl on our machine doesn't show any issues.
How should I proceed? Open a support ticket with Shopify?
FYI, I see that this specific problem only started yesterday on Feb 19 2013, so it might be a temporary issue.
FYI, the problem was caused by reusing a temporary access code.
Our fault - Shopify could have been more clear in their error message though.
Is it possible to enable logging in Apache ACE? If yes, how?
In the source code, I can see that the LogService is used to write messages to the log, but I am not able to locate the logs when I start the ACE devserver.
The LogService is a standard compendium service, and you can use any implementation to actually record the log statements. We use the one from Apache Felix, and there are shell commands to retrieve log statements (hint: the command is called "log"). This implementation does not write them to disk though. Based on the specification, it would be easy to do this yourself: a LogReaderService exists to read from, and you can register yourself as a LogListener.
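As a rough illustration (an untested sketch, not ACE-specific; the class and file names are made up), a small bundle could register a LogListener with the LogReaderService and append entries to a file itself:

import java.io.FileWriter;
import java.io.PrintWriter;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.service.log.LogEntry;
import org.osgi.service.log.LogListener;
import org.osgi.service.log.LogReaderService;

public class FileLogActivator implements BundleActivator {

    public void start(BundleContext context) throws Exception {
        ServiceReference ref = context.getServiceReference(LogReaderService.class.getName());
        if (ref != null) {
            LogReaderService reader = (LogReaderService) context.getService(ref);
            final PrintWriter out = new PrintWriter(new FileWriter("ace.log", true), true);
            reader.addLogListener(new LogListener() {
                public void logged(LogEntry entry) {
                    // write each entry as "<time> [<level>] <message>"
                    out.println(entry.getTime() + " [" + entry.getLevel() + "] " + entry.getMessage());
                }
            });
        }
    }

    public void stop(BundleContext context) {
    }
}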