Perform authentication to the Polarion web service with Savon

I am attempting to follow the discussion here using Ruby and Savon. I am able to retrieve a session ID, but whenever I make a request through a client that requires authentication (the tracker service), I receive an Authorization Failed error.
require 'savon'
tracker_url = 'http://myserver/polarion/ws/services/TrackerWebService?wsdl'
session_url = 'http://myserver/polarion/ws/services/SessionWebService?wsdl'
# todo handle bad login credentials gracefully
session_client = Savon.client(wsdl: session_url)
response = session_client.call(:log_in, message: {user_name: 'lsimons', password: 'mypassword'})
session_id = response.header[:session_id]
puts "Session ID: #{session_id}"
tracker_client = Savon.client(wsdl: tracker_url, soap_header: {"session" => session_id}, headers: {"sessionID" => session_id})
puts "Requesting Workitem"
begin
  tracker_client.call(:get_work_item_by_id, message: {project_id: 'myProject', workitem_id: 'myWorkitem'})
rescue Savon::Error => e
  puts "Client call failed: #{e.message}"
end
This code creates the following SOAP request for the tracker_client:
<?xml version="1.0" encoding="UTF-8"?>
<env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ins0="http://ws.polarion.com/TrackerWebService-impl" xmlns:ins1="http://ws.polarion.com/types" xmlns:ins2="http://ws.polarion.com/TrackerWebService-types" xmlns:ins3="http://ws.polarion.com/ProjectWebService-types" xmlns:tns1="http://ws.polarion.com/TrackerWebService" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <env:Header>
    <session>2164640482421325916</session>
  </env:Header>
  <env:Body>
    <tns1:getWorkItemById>
      <ins0:projectId>myProject</ins0:projectId>
      <ins0:workitemId>myWorkitem</ins0:workitemId>
    </tns1:getWorkItemById>
  </env:Body>
</env:Envelope>
However, in the forum discussion the sessionID element occurs before the header, which I didn't think was possible with standard SOAP. Is there a way to achieve this with Savon, or am I misinterpreting the forum discussion?

I faced the same problem following the same thread. This is how I made it work (by replicating the response headers of the log_in request):
tracker_client = Savon.client(
  wsdl: tracker_url,
  soap_header: {
    "ns1:sessionID" => session_id,
    :attributes! => {
      "ns1:sessionID" => {
        "env:actor" => "http://schemas.xmlsoap.org/soap/actor/next",
        "env:mustUnderstand" => "0",
        "xmlns:ns1" => "http://ws.polarion.com/session"
      }
    }
  }
)

Old question, but I thought I'd add some info to hopefully help somebody.
I am using lolsoap to talk to Polarion. In the resulting document above, the sessionID element is stripped of all namespaces and attributes. The assessment is also right that the actor and mustUnderstand attributes seem irrelevant.
To add the header properly, with all of its attributes, you need to get the Nokogiri::XML::Node, dup it, and then add it to the header of the document. There is a bug in nokogiri/libxml2 where adding child elements can break namespaces unless the node is cloned before it is added [1].
In lolsoap it is done something like:
auth_header = login_response.nokogiri_doc.xpath("//*[local-name()='sessionID']")[0].dup
other_request.header.__node__ << auth_header
Please note the dup operation. header.__node__ is just the header Nokogiri::XML::Element of an arbitrary SOAP request.
The dup operation ensures the element is added to the other document with all necessary namespaces and attributes properly defined.
I don't know whether Savon lets you touch the request XML directly, but I suspect it does. Hope this helps.
[1] https://github.com/sparklemotion/nokogiri/issues/1200
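If it does, Savon 2's :xml option (which replaces the envelope Savon would otherwise generate with one you supply) should let you splice the copied header node in yourself. A rough, untested sketch along those lines, reusing auth_header from the lolsoap snippet above and tracker_client from the question:

require 'savon'
require 'nokogiri'

envelope = Nokogiri::XML(<<~XML)
  <env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/"
                xmlns:tns1="http://ws.polarion.com/TrackerWebService">
    <env:Header/>
    <env:Body>
      <tns1:getWorkItemById>
        <projectId>myProject</projectId>
        <workitemId>myWorkitem</workitemId>
      </tns1:getWorkItemById>
    </env:Body>
  </env:Envelope>
XML

# append the dup'ed sessionID node (with its namespaces intact) to the header
envelope.at_xpath('//env:Header', 'env' => 'http://schemas.xmlsoap.org/soap/envelope/') << auth_header

# :xml replaces the whole request envelope that Savon would otherwise build
tracker_client.call(:get_work_item_by_id, xml: envelope.to_xml)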


Webhook call failed. Error: Failed to parse webhook JSON response: Expect message object but got: [Chinese letters]

I'm building my own WebhookClient for Dialogflow. My code is the following (using Azure Functions, similar to Firebase Functions):
const { WebhookClient } = require('dialogflow-fulfillment');

module.exports = async function(context, req) {
  const agent = new WebhookClient({ request: context.req, response: context.res });

  function welcome(agent) {
    agent.add(`Welcome to my agent!!`);
  }

  let intentMap = new Map();
  intentMap.set("Look up person", welcome);
  agent.handleRequest(intentMap);
};
I tested the query and the response payload looks like this:
{
  "fulfillmentText": "Welcome to my agent!!",
  "outputContexts": []
}
And the headers in the response look like this:
Transfer-Encoding: chunked
Content-Type: application/json; charset=utf-8
Server: Microsoft-IIS/10.0
X-Powered-By: ASP.NET
Date: Tue, 11 Dec 2018 18:16:06 GMT
But when I test my bot in dialog flow, it returns the following:
Webhook call failed. Error: Failed to parse webhook JSON response:
Expect message object but got:
"笀ഀ਀  ∀昀甀氀昀椀氀氀洀攀渀琀吀攀砀琀∀㨀 ∀圀攀氀挀漀洀攀 琀漀 洀礀 愀最攀渀琀℀℀∀Ⰰഀ਀  ∀漀甀琀瀀甀琀䌀漀渀琀攀砀琀猀∀㨀 嬀崀ഀ਀紀".
There are Chinese symbols!? Here's a video of me testing it out in Dialogflow: https://imgur.com/yzcj0Kw
I know this should be a comment (as it isn't really an answer), but it's fairly verbose and I didn't want it to get lost in the noise.
I have the same problem using WebAPI on a local machine (using ngrok to tunnel back to Kestrel). A friend of mine has working code (he's hosting in AWS rather than Azure), so I started examining the differences between our responses. I've noticed the following:
This occurs with both Azure Functions and WebAPI (so it's not that)
The JSON payloads are identical (so it's not that)
The working payload isn't chunked
The working payload doesn't have a content type
As an experiment, I added this code to Startup.cs, in the Configure method:
app.Use(async (context, next) =>
{
    var original = context.Response.Body;
    var memory = new MemoryStream();
    context.Response.Body = memory;
    await next();
    memory.Seek(0, SeekOrigin.Begin);
    if (!context.Response.Headers.ContentLength.HasValue)
    {
        context.Response.Headers.ContentLength = memory.Length;
        context.Response.ContentType = null;
    }
    await memory.CopyToAsync(original);
});
This code disables response chunking, which now causes a new and slightly more interesting error for me in the Google console:
Webhook call failed. Error: Failed to parse webhook JSON response: com.google.gson.stream.MalformedJsonException: Unterminated object at line 1 column 94 path $.\u0000\\"\u0000f\u0000u\u0000l\u0000f\u0000i\u0000l\u0000l\u0000m\u0000e\u0000n\u0000t\u0000M\u0000e\u0000s\u0000s\u0000a\u0000g\u0000e\u0000s\u0000\\"\u0000.\
I thought this could be encoding at first, so I stashed my JSON as a string and used the various Encoding classes to convert between them, to no avail.
I fired up Postman and called my endpoint (using the same payload as Google) and I can see the whole response payload correctly - it's almost as if Google's end is terminating the stream part-way through reading...
Hopefully, this additional information will help us figure out what's going on!
Update
After some more digging and various server/lambda configs, I spotted this post here: https://github.com/googleapis/google-cloud-dotnet/issues/2258
It turns out that json.net IS the culprit! I guess it's something to do with the formatters on the way out of the pipeline. In order to prove this, I added this hard-coded response to my POST controller and it worked! :)
return new ContentResult()
{
    Content = "{\"fulfillmentText\": null,\"fulfillmentMessages\": [],\"source\": null,\"payload\": {\"google\": {\"expectUserResponse\": false,\"userStorage\": null,\"richResponse\": {\"items\": [{\"simpleResponse\": {\"textToSpeech\": \"Why hello there\",\"ssml\": null,\"displayText\": \"Why hello there\"}}],\"suggestions\": null,\"linkOutSuggestion\": null}}}}",
    ContentType = "application/json",
    StatusCode = 200
};
Despite the HTTP header saying the charset is UTF-8, that response is definitely encoded as UTF-16LE, and the receiving side is treating the bytes as UTF-16BE. Given you're running on Azure, it sounds like there is some configuration you need to make in Azure Functions to emit the output as UTF-8 instead of UTF-16 strings.
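You can reproduce the same kind of CJK-looking characters seen in the question with a minimal Ruby sketch of that byte-order mix-up (purely illustrative):

# Encode a JSON string as UTF-16LE, then reinterpret the same bytes as UTF-16BE.
json = '{"fulfillmentText": "Welcome to my agent!!"}'

sent_bytes = json.encode(Encoding::UTF_16LE)               # what the webhook sent
misread    = sent_bytes.force_encoding(Encoding::UTF_16BE) # how the parser read it

puts misread.encode(Encoding::UTF_8)
# => "笀∀昀甀氀昀椀氀氀洀攀渀琀吀攀砀琀∀..." -- the same style of garbage as in the error

For example, '{' is the byte 0x7B; as UTF-16LE that is the byte pair 7B 00, which read as UTF-16BE becomes U+7B00 (笀), the first character of the garbled response.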

Programmatically provide NiFi InvokeHTTP different certificates

I have a requirement in NiFi where I have to cycle through different HTTPS REST endpoints and provide different certificates for some endpoints and different username/password credentials for others.
I used the InvokeHTTP processor to send the requests; although the URL accepts expression language, I cannot set up the SSLContextService with an expression.
Alternatively, I thought of using ExecuteScript to call those endpoints; however, as discussed in the Stack Overflow post listed here, I still don't know how to programmatically call an external service through a script.
Any help appreciated.
Just for fun, I created a Groovy script that calls HTTP.
You can certainly avoid using it, and I believe the InvokeHTTP processor covers almost all needs.
However, let's call the test REST service /post at https://httpbin.org.
The flow: GenerateFlowFile (generates the body) -> ExecuteGroovyScript (calls the service).
The body generated by GenerateFlowFile: {"id":123, "txt":"aaabbbccc"}
In ExecuteGroovyScript 1.5.0, declare the CTL.ssl1 property and link it to a StandardSSLContextService.
And now the script:
@Grab(group='acme.groovy', module='acmehttp', version='20180301', transitive=false)
import groovyx.acme.net.AcmeHTTP
import org.apache.nifi.ssl.SSLContextService.ClientAuth

def ff = session.get()
if (!ff) return
def http
ff.write{ ffIn, ffOut ->
    http = AcmeHTTP.post(
        url: "https://httpbin.org/post",      // base url
        query: [aaa: "hello", bbb: "world!"], // query parameters
        // send flowfile content (stream) as a body
        body: ffIn,
        headers: [
            // assign content-type from flowfile `mime.type` attribute
            "content-type": ff.'mime.type'
        ],
        // you can declare `CTL.ssl1`, `CTL.ssl2`, ... processor properties and map them to SSLContextService
        // then depending on some condition create a different SSLContext
        // in this case let's take the `CTL.ssl1` service to create the context
        ssl: CTL["ssl" + 1].createSSLContext(ClientAuth.WANT),
        // the next commented line creates a trust-all ssl context:
        //ssl: AcmeHTTP.getNaiveSSLContext(),
        // the receiver that transfers the url response stream to the flowfile stream
        receiver: { respStream, httpCtx -> ffOut << respStream }
    )
}
// set response headers as flowfile attributes with the 'http.header.' prefix
http.response.headers.each{ k, v -> ff['http.header.' + k] = v }
// status code and message
ff.'http.status.code' = http.response.code
ff.'http.status.message' = http.response.message
if (http.response.code < 400) {
    // transfer to success if the response was ok
    REL_SUCCESS << ff
} else {
    // transfer to failure when the response code is 400+
    REL_FAILURE << ff
}

What's the easiest way to display content depending on the URL parameter value in DocPad

I'd like to check for a URL parameter and then display a confirmation message depending on it.
E.g. if a GET request is made to /form?c=thankyou, DocPad shows the form with a thank-you message.
I think there are two basic ways to do this.
Look at the URL on the server side (routing) and display differing content according to URL parameters.
Look at the parameter on the client side using JavaScript and either inject or show a DOM element (e.g. a div) that acts as a message box.
To do this on the server side you would need to intercept incoming requests in the docpad.coffee file in the serverExtend event. Something like this:
events:
    # Server Extend
    # Used to add our own custom routes to the server before the docpad routes are added
    serverExtend: (opts) ->
        # Extract the server from the options
        {server} = opts
        docpad = @docpad

        # As we are now running in an event,
        # ensure we are using the latest copy of the docpad configuration
        # and fetch our urls from it
        latestConfig = docpad.getConfig()
        oldUrls = latestConfig.templateData.site.oldUrls or []
        newUrl = latestConfig.templateData.site.url

        server.get "/form", (req, res, next) ->
            # only handle requests that carry the ?c=thankyou parameter
            # (Express strips query strings before route matching, so match on the path)
            return next() unless req.query.c is 'thankyou'
            document = docpad.getCollection('documents').findOne({relativeOutPath: 'index.html'})
            docpad.serveDocument({
                document: document,
                req: req,
                res: res,
                next: next,
                statusCode: 200
            })
This is similar to an answer I gave at how to handle routes in Docpad.
But I think what you are suggesting is more commonly done on the client side, so it's not really specific to DocPad (the following assumes jQuery).
if (location.search == "?c=thankyou") {
    $('#message-sent').show(); // show hidden div
    setTimeout(function () {
        $('#message-sent').fadeOut(1000); // fade it out after a period of time
    }, 1000);
}
This is similar to an answer I gave in the following: Docpad: show error/success message on contact form
Edit
A third possibility I've just realised is setting the document to be dynamically generated on each request by setting the metadata property dynamic = true. This also adds the request object (req) to the template data passed to the page. See the DocPad documentation on this: http://docpad.org/docs/meta-data.
One gotcha that gets everyone when setting the page to dynamic is that you must have docpad-plugin-cleanurls installed, or nothing will happen. Your metadata might look something like this:
---
layout: 'default'
title: 'My title'
dynamic: true
---
And perhaps on the page (html.eco):
<% if @req.url == '/?c=thankyou': %>
<h1>Got It!!!</h1>
<%end%>

How do I pass in the 'hd' option for OpenID Connect (Oauth2 Login) using the Google Ruby API Client?

The "Using Oauth 2.0 for Login" doc lists the 'hosted domain' parameter as a valid authentication parameter, but using the Google API Client for Ruby linked at the bottom I don't see how to pass it along with my request. Anyone have an example?
OK, it wasn't perfect, but I just passed it to the authorization_uri method on the authorization object, like so:
client = Google::APIClient.new
client.authorization.authorization_uri(:hd => 'my_domain')
I still had trouble updating the Addressable::URI object to save the change (kept getting a "comparison of Array with Array failed" error), but this was good enough for me to use.
I couldn't get it to work using the Google::APIClient but managed to get it working using the OAuth2::Client like this
SCOPES = [
  'https://www.googleapis.com/auth/userinfo.email'
].join(' ')

client ||= OAuth2::Client.new(G_API_CLIENT, G_API_SECRET, {
  :site          => 'https://accounts.google.com',
  :authorize_url => "/o/oauth2/auth",
  :token_url     => "/o/oauth2/token"
})

...

redirect client.auth_code.authorize_url(:redirect_uri => redirect_uri, :scope => SCOPES, :hd => 'yourdomain.com')

Check if twitter username exists

Is there a way to check if a Twitter username exists?
Without being authenticated via OAuth or Twitter's basic authentication?
UPDATE 2021: This API is no longer available.
As of right now, you're better off using the API the signup form uses to check username availability in real time. Requests are of the format:
https://twitter.com/users/username_available?username=whatever
It gives you a JSON response whose valid key is true if the username can be registered:
{"valid":false,"reason":"taken","msg":"Username has already been taken","desc":"That username has been taken. Please choose another."}
{"valid":true,"reason":"available","msg":"Available!","desc":"Available!"}
{"valid":false,"reason":"is_banned_word","msg":"Username is unavailable","desc":"The username \"root\" is unavailable. Sorry!"}
The reason this is better than checking for 404 responses is that sometimes words are reserved (like 'root' above), or a username is actually taken but for some reason the account is gone from the Twitter front end.
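For reference, a minimal Ruby sketch of calling that availability endpoint looked something like this (the endpoint no longer works, per the 2021 update above, and the method name is just for illustration):

require 'net/http'
require 'json'
require 'uri'

def username_available?(username)
  # hit the signup form's availability endpoint and read the "valid" flag
  uri = URI("https://twitter.com/users/username_available?username=#{URI.encode_www_form_component(username)}")
  response = Net::HTTP.get_response(uri)
  JSON.parse(response.body)["valid"]
end

username_available?("whatever")  # => true if the name could be registered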
UPDATE
The Twitter REST API v1 is no longer active.
So use
https://api.twitter.com/1.1/users/show.json?screen_name=username
You can also use the API with a username:
http://api.twitter.com/1/users/show.xml?screen_name=tarnfeld
which will give you:
<?xml version="1.0" encoding="UTF-8"?>
<user>
...................
<screen_name>tarnfeld</screen_name>
<location>Portsmouth, UK</location>
.................
</status>
</user>
Or, if the user does not exist:
<?xml version="1.0" encoding="UTF-8"?>
<hash>
<request>/1/users/show.xml?screen_name=tarnfeldezf</request>
<error>Not found</error>
</hash>
As API v1 is no longer available, here is another way to check if a Twitter account exists. The response headers for a non-existing account contain a 404 (page not found) status.
function twitterAccountExists($username) {
    $headers = get_headers("https://twitter.com/" . $username);
    if (strpos($headers[0], '404') !== false) {
        return false;
    } else {
        return true;
    }
}
Here is how it works in PHP:
$user_infos = 'http://api.twitter.com/1/users/show.xml?screen_name=' . $username;
if (!@fopen($user_infos, 'r'))
{
    return false;
}
return true;
This worked for me, close to what sferik has posted.
def twitter_user_exists?(user)
  Twitter.user(user)
  true
rescue Twitter::Error::NotFound
  false
end
You can try to grab the http://twitter.com/username page and read the response to see if you get the "Sorry, that page doesn’t exist!" page.
Edit:
As @Pablo Fernandez mentioned in a comment, it is better (faster, more reliable) to check the response header, which will be "404 Not Found" if the user doesn't exist.
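A minimal Net::HTTP sketch of that header check (the method name is just for illustration, and Twitter's front end may behave differently for unauthenticated requests these days):

require 'net/http'
require 'uri'

def profile_page_exists?(username)
  # only the status code matters, so the body is ignored
  response = Net::HTTP.get_response(URI("https://twitter.com/#{username}"))
  response.code != "404"
end

profile_page_exists?("tarnfeld")  # => true if the profile page is found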
Using Ruby, you could install the twitter gem and then define the following method:
require 'twitter'

def user_exists?(user)
  Twitter.user(user)
  true
rescue Twitter::NotFound
  false
end
Then, simply pass in a Twitter user name or id to your method, like so:
user_exists?("sferik") #=> true
user_exists?(7505382) #=> true
You can try:
<?php
$user = "toneid";
$contents = @file_get_contents('http://www.twitter.com/' . $user);
if (!$contents) {
    // Report error
    echo "Not a valid user";
} else {
    // It is a valid user
    echo "OK!";
}
?>
UPDATE: This API has not been available since 2012.
According to the API docs, you can pass an email address to the users/show method. I would assume that if a user doesn't exist you'd get back a 404, which should allow you to determine whether or not the user exists.
e.g.: http://twitter.com/users/show.xml?email=t...@example.com
Result if the user does not exist:
<?xml version="1.0" encoding="UTF-8"?>
<hash>
  <request>/users/show.xml?email=tur...@example.com</request>
  <error>Not found</error>
</hash>