Return a user's photo URL via SCIM in UnboundID LDAP

In the SCIM core schema there is a simple multi-valued attribute "photos" defined to hold the URLs of a user's photos.
In the UnboundID Data Store config directory the scim-resources.xml file has the following commented out under the User resource:
<!-- Mapping must be defined to use this attribute
<attribute name="photos" schema="urn:scim:schemas:core:1.0"
readOnly="false" required="false">
<description>URL of photos of the User</description>
<simpleMultiValued childName="photo" dataType="string">
<canonicalValue name="photo"/>
<canonicalValue name="thumbnail"/>
</simpleMultiValued>
</attribute>
-->
Further down in the spec is an example output:
"photos": [
{
"value": "https://photos.example.com/profilephoto/72930000000Ccne/F",
"type": "photo"
},
{
"value": "https://photos.example.com/profilephoto/72930000000Ccne/T",
"type": "thumbnail"
}
],
I have User entries with the jpegPhoto attribute populated. Questions:
Does UnboundID already have an endpoint defined to access these photos? I don't want just the encoded binary string value of jpegPhoto.
If such an endpoint exists (or I create one), do I then need to write a transformation class and reference it in a <subMapping> child element of the <canonicalValue> elements?
If this is documented somewhere, I haven't been able to find it.
Any guidance appreciated.
Grant

Since the SCIM photos attribute refers to an array of external URLs to the photos, you could create a Data Store virtual attribute which maps in SCIM to an array of URLs that reference a hosted servlet to retrieve the photo(s). There is no existing server endpoint for returning jpegPhoto attributes from an ldap entry and you've said you don't want the base64-encoded binary data via SCIM.
An HTTP Servlet Extension which returns photos would ideally accept the same credentials as the SCIM user for authentication and perform an LDAP search as the SCIM user that will honor ACI access control for the jpegPhoto attribute, e.g.
GET https://server:8443/photosEndpoint/{entryUUID}[/attribute-option]
Authorization: <scim user credentials>
Since jpegPhoto is a multi-valued attribute, the servlet can return the single value (or the first of many) as an image/jpeg content-type entity. It looks like you're attempting to select from multiple photos using a qualifier, e.g. /F for full size and /T for thumbnail, but there is no way to tell multi-valued attribute values apart in LDAP without an attribute option, e.g.
jpegPhoto returned via /photosEndpoint/{entryUUID}
jpegPhoto;size=fullsize returned via /photosEndpoint/{entryUUID}[/fullsize | /F]
jpegPhoto;size=thumbnail returned via /photosEndpoint/{entryUUID}[/thumbnail | /T]
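As a sketch of how such a servlet might map a request path to an LDAP attribute description (the /photosEndpoint path and the size attribute options are hypothetical, per the scheme above):

```python
# Map an optional URL path suffix to the LDAP attribute description to
# request. The "size" attribute-option names are hypothetical examples
# from this answer, not an existing UnboundID convention.
SUFFIX_TO_ATTRIBUTE = {
    None: "jpegPhoto",                      # /photosEndpoint/{entryUUID}
    "fullsize": "jpegPhoto;size=fullsize",  # .../fullsize or /F
    "F": "jpegPhoto;size=fullsize",
    "thumbnail": "jpegPhoto;size=thumbnail",
    "T": "jpegPhoto;size=thumbnail",
}

def attribute_for_path(path):
    """Return the LDAP attribute description for a request path like
    /photosEndpoint/{entryUUID}[/suffix]."""
    parts = [p for p in path.split("/") if p]
    # parts[0] == "photosEndpoint", parts[1] == entryUUID
    suffix = parts[2] if len(parts) > 2 else None
    return SUFFIX_TO_ATTRIBUTE[suffix]
```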
The servlet could also be written to handle multiple photos by returning them in a multipart MIME response with one jpegPhoto per part. The part names would include the attribute options, if available. One downside is that this kind of response would not render easily in a browser.
Overall this is a nice-to-have idea, but it is some amount of work in practice. UnboundID support might be able to help.

As a starting point I wrote a simple servlet to return the first value of jpegPhoto (if present) as an image/png via an LDAP query against the uid. I then wrote a simple transform class to return the relevant photo URL based on the uid:
import com.unboundid.asn1.ASN1OctetString;
import com.unboundid.scim.schema.AttributeDescriptor;
import com.unboundid.scim.sdk.SCIMAttributeValue;
import com.unboundid.util.ByteString;

public class PhotoTransform extends com.unboundid.scim.ldap.Transformation {

    @Override
    public String toLDAPFilterValue(String scimFilterValue) {
        // Not needed for a read-only mapping
        return null;
    }

    @Override
    public ASN1OctetString toLDAPValue(AttributeDescriptor descriptor, SCIMAttributeValue value) {
        // Not needed for a read-only mapping
        return null;
    }

    @Override
    public SCIMAttributeValue toSCIMValue(AttributeDescriptor descriptor, ByteString value) {
        // Build the photo URL from the entry's uid value
        return SCIMAttributeValue.createStringValue("http://localhost:4567/photo/" + value.stringValue());
    }
}
I then referenced the class in scim-resources.xml, passing the uid as the LDAP attribute:
<attribute name="photos" schema="urn:scim:schemas:core:1.0"
readOnly="false" required="false">
<description>URL of photos of the User</description>
<simpleMultiValued childName="photo" dataType="string">
<canonicalValue name="photoUrl">
<subMapping name="value" ldapAttribute="uid"
transform="com.example.scim.PhotoTransform">
</subMapping>
</canonicalValue>
<canonicalValue name="thumbnail"/>
</simpleMultiValued>
</attribute>
and a SCIM query (against the reference implementation):
curl 'http://localhost:8080/Users?filter=userName%20eq%20%22jsmith%22' -u bjensen:password
now returns:
{
"totalResults" : 1,
"itemsPerPage" : 1,
"startIndex" : 1,
"schemas" : ["urn:scim:schemas:core:1.0", "urn:scim:schemas:extension:enterprise:1.0"],
"Resources" : [{
"name" : {
"formatted" : "Mr. John Smith",
"familyName" : "Smith",
"givenName" : "John"
},
"phoneNumbers" : [{
"value" : "tel:555-555-1256",
"type" : "work"
}
],
"userName" : "jsmith",
"emails" : [{
"value" : "jsmith@example.com",
"type" : "work"
}
],
"photos" : [{
"value" : "http://localhost:4567/photo/jsmith",
"type" : "photoUrl"
}
],
"id" : "fb4134dc-0a93-476a-964a-c29847f3bf79",
"meta" : {
"created" : "2015-09-09T00:17:12.768Z",
"lastModified" : "2015-09-09T00:17:12.768Z",
"location" : "http://localhost:8080/v1/Users/fb4134dc-0a93-476a-964a-c29847f3bf79",
"version" : "\"20150909001712.768Z\""
}
}]
}

Related

Using JIRA REST-API to update a custom field

I've been trying to set up a new custom webhook in Zapier that automatically updates a custom field in JIRA whenever a specific action occurs. I've followed some tutorials on how to do it, but when I sent the PUT request it didn't work. I also tested a bunch in Postman, but with similar results.
I used this URL:
https://bitsandbirds.atlassian.net/rest/api/3/issue/CYBIRD-1252
Here is my input:
{
"update" : {
"customfield_10051" : "test"
}
}
This is what I got back:
{
"errorMessages": [
"Can not deserialize instance of java.util.ArrayList out of VALUE_STRING token\n at [Source: org.apache.catalina.connector.CoyoteInputStream@498ac517; line: 3, column: 8] (through reference chain: com.atlassian.jira.rest.v2.issue.IssueUpdateBean[\"update\"])"
]
}
Anyone know where I messed up & how to do it right?
The error occurs because "update" expects each field to map to an array of operations; to set a value directly, use "fields" instead:
{
"fields": {
"customfield_10051" : "test"
}
}
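A minimal Python sketch of the corrected PUT request (the URL and issue key are from the question; the base64 credentials value is a placeholder):

```python
import json
import urllib.request

def build_update_request(base_url, issue_key, field_id, value, token):
    """Build (but do not send) the PUT request that sets a custom field.
    Note "fields" rather than "update": "update" expects arrays of
    operations, which is why the original payload failed to deserialize."""
    payload = {"fields": {field_id: value}}
    return urllib.request.Request(
        url="{}/rest/api/3/issue/{}".format(base_url, issue_key),
        data=json.dumps(payload).encode("utf-8"),
        method="PUT",
        headers={
            "Content-Type": "application/json",
            "Authorization": "Basic {}".format(token),  # placeholder credentials
        },
    )

req = build_update_request(
    "https://bitsandbirds.atlassian.net", "CYBIRD-1252",
    "customfield_10051", "test", "<base64-credentials>")
```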

How to pass a file to an API from Azure Logic App

I have a simple Logic App. The trigger is on a new file (e.g. Dropbox, OneNote, etc.).
I want to pass the filename and the file content to an API App (Web API).
But I get an error, or worse, the content is null once in the API!
The API is in C#.
How do you pass a file (e.g. pdf, png) to an API from a Logic App?
UPDATE:
In Logic App here my action code:
"UploadNewFile": {
"inputs": {
"method": "post",
"queries": {
"filedata": {
"fileName":"@triggerOutputs()['headers']['x-ms-file-name']",
"fileContent":"@base64(triggerBody())"
}
},
"uri": "https://smartuseconnapiapp.azurewebsites.net/api/UploadNewFile"
},
"metadata": {
"apiDefinitionUrl": "https://smartuseconnapiapp.azurewebsites.net/swagger/docs/v1",
"swaggerSource": "website"
},
"runAfter": {},
"type": "Http"
}
In my API App, If the function is declared like this filedata is null
[Route("api/UploadNewFile")]
[HttpPost]
public HttpStatusCode UploadNewFile([FromBody] string filedata)
And if I don't add the [FromBody] like that I got an error.
[Route("api/UploadNewFile")]
[HttpPost]
public HttpStatusCode UploadNewFile(string filedata)
Yes, you can send binary content to your own API in a few different ways. Our out-of-the-box actions use this as well.
If you want to send the binary contents as the request body
For example, an outgoing request from the Logic App could have binary content and a content-type header of image/png
In this case the swagger for the body of your request should be binary - or:
{ "name": "body",
"in": "body",
"schema": {
"type": "string",
"format": "binary"
} ... }
That should tell the designer that the request body is binary content. If a previous action or the trigger had binary data (e.g. "When a file is added to FTP") and you used that output in the body, it would show up in your custom API inputs as:
"Body": "@triggerBody()"
Notice there are no { and } on the expression (string interpolation), which would convert the binary content to a string. This sends an outgoing request to the connector with the binary content, so your controller just needs to accept binary content and honor the content-type header as needed.
If you want to send binary content in a JSON request
Sometimes you don't want to send binary as the full request, but as a property of another object. For instance, a person object may have a name, address, and profile pic all in the request body. In this case we recommend sending the binary content base64-encoded. You can use "format": "base64" in swagger to describe such a property. In code view it would look something like:
"Body": {
"Name": "@triggerBody()['Name']",
"ProfilePic": "@base64(body('Dropbox'))"
}
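The base64-in-JSON approach can be sketched in Python (the Name/ProfilePic property names are taken from the example above):

```python
import base64
import json

def build_person_body(name, profile_pic_bytes):
    """Build a JSON request body with the binary picture base64-encoded,
    mirroring the "format": "base64" approach described above."""
    return json.dumps({
        "Name": name,
        "ProfilePic": base64.b64encode(profile_pic_bytes).decode("ascii"),
    })

# Example: a PNG file header standing in for real image bytes
body = build_person_body("John Smith", b"\x89PNG\r\n\x1a\n")
```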
Hope that helps. Let me know if you have any questions. Here is an article on how Logic Apps preserves content-types as well.
I found how to do it.
I needed to pass the filename in the query string and the file in the body of the HTTP request. Today it's not possible to do this in the design view, so you need to go into code view.
In the Logic App code
"queries": {
"fileName": "@{triggerOutputs()['headers']['x-ms-file-name']}"
},
"body": "@triggerBody()"
In the API App code
public async Task<HttpResponseMessage> UploadNewFile([FromUri] string fileName)
{
    // ReadAsByteArrayAsync returns a Task, so await it to get the file bytes
    var filebytes = await Request.Content.ReadAsByteArrayAsync();
    [...]
}
A more detailed explanation can be found in this blog post:
http://www.frankysnotes.com/2017/03/passing-file-from-azure-logic-app-to.html

apache nutch to index to solr via REST

I'm a newbie in Apache Nutch, writing a client to use it via REST.
I succeeded in all the steps (INJECT, FETCH, ...), but the last step, indexing to Solr, fails to pass the parameter.
The request (I formatted it on some website):
{
"args": {
"batch": "1463743197862",
"crawlId": "sample-crawl-01",
"solr.server.url": "http://x.x.x.x:8081/solr/"
},
"confId": "default",
"type": "INDEX",
"crawlId": "sample-crawl-01"
}
The Nutch logs:
java.lang.Exception: java.lang.RuntimeException: Missing SOLR URL. Should be set via -D solr.server.url
SOLRIndexWriter
solr.server.url : URL of the SOLR instance (mandatory)
solr.commit.size : buffer size when sending to SOLR (default 1000)
solr.mapping.file : name of the mapping file for fields (default solrindex-mapping.xml)
solr.auth : use authentication (default false)
solr.auth.username : username for authentication
solr.auth.password : password for authentication
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522)
Is this implemented, the parameter passing to the Solr plugin?
You need to create/update a configuration using the /config/create/ endpoint, with a POST request and a payload similar to:
{
"configId":"solr-config",
"force":"true",
"params":{"solr.server.url":"http://127.0.0.1:8983/solr/"}
}
In this case I'm creating a new configuration and specifying the solr.server.url parameter. You can verify this is working with a GET request to /config/solr-config (solr-config is the previously specified configId); the output should contain all the default parameters (see https://gist.github.com/jorgelbg/689b1d66d116fa55a1ee14d7193d71b4 for an example of the default output). If everything worked, the returned JSON should show the solr.server.url option with the desired value: https://gist.github.com/jorgelbg/689b1d66d116fa55a1ee14d7193d71b4#file-nutch-solr-config-json-L464.
After this just hit the /job/create endpoint to create a new INDEX Job, the payload should be something like:
{
"type":"INDEX",
"confId":"solr-config",
"crawlId":"crawl01",
"args": {}
}
The idea is that you need to pass the configId that you created (with solr.server.url specified) along with the crawlId and other args. This should return something similar to:
{
"id": "crawl01-solr-config-INDEX-1252914231",
"type": "INDEX",
"confId": "solr-config",
"args": {},
"result": null,
"state": "RUNNING",
"msg": "OK",
"crawlId": "crawl01"
}
Bottom line: you need to create a new configuration with solr.server.url set, instead of specifying it through the args key in the JSON payload.
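A sketch of assembling the two payloads in Python (the configId and Solr URL mirror the examples above; actually POSTing them to /config/create and /job/create is left out):

```python
# Sketch of the two payloads described above: first create a configuration
# carrying solr.server.url, then reference it from the INDEX job.
def build_config_payload(config_id, solr_url):
    """Payload for POST /config/create."""
    return {"configId": config_id,
            "force": "true",
            "params": {"solr.server.url": solr_url}}

def build_index_job(config_id, crawl_id):
    """Payload for POST /job/create; note args stays empty because the
    Solr URL lives in the referenced configuration."""
    return {"type": "INDEX", "confId": config_id,
            "crawlId": crawl_id, "args": {}}

config = build_config_payload("solr-config", "http://127.0.0.1:8983/solr/")
job = build_index_job("solr-config", "crawl01")
```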

Google Drive API insert permission only returns id

I'm using this request to grant a user permission for a folder:
https://www.googleapis.com/drive/v2/files/{{id}}/permissions?sendNotificationEmails=true&fields=emailAddress,id,name,role,type,value
but it returns only the id field, like this:
{ "id": "0123456789876543210" }
How can I get all the other information in the response? Or is there a bug in the Drive REST API?
From the documentation here https://developers.google.com/drive/v2/reference/permissions/list:
The returned resource is in the following form:
{
"kind": "drive#permissionList",
"etag": etag,
"selfLink": string,
"items": [
permissions Resource
]
}
Which means you would need to specify your fields query param as:
items(emailAddress,id,name,role,type,value)
Rather than:
emailAddress,id,name,role,type,value
Alternatively, you can leave the fields param out to ensure you actually have the information available. Hope that helps!
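A small Python sketch of building the request URL with the wrapped fields parameter (the file ID here is a placeholder):

```python
from urllib.parse import urlencode

def permissions_list_url(file_id, subfields):
    """Build the permissions.list URL with a fields parameter that wraps
    the requested sub-fields in items(...), as described above."""
    base = "https://www.googleapis.com/drive/v2/files/{}/permissions".format(file_id)
    fields = "items({})".format(",".join(subfields))
    return "{}?{}".format(base, urlencode({"fields": fields}))

url = permissions_list_url(
    "0B123",  # placeholder file ID
    ["emailAddress", "id", "name", "role", "type", "value"])
```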

auth tokens, local storage and meteor

We are running a web application (shiny-server, where coding is done in R) and want to add an authentication layer to it.
Rather than building something to do this in R, I thought of using Meteor to create auth tokens and all that.
This is the way I was thinking of doing it:
A user logs in with Meteor, and Meteor creates a database entry that looks something like this:
{ "createdAt" : 1372521823708,
"_id" : "HSdbPBuYy5wW6FBPL",
"services" : { "password" : { "srp" : { "identity" : "vKpxEzXboBaQsWYyJ",
"salt" : "KRt5HrziG6RDnWN8o",
"verifier" : "8d4b6a5edd21ce710bd08c6affb6fec29a664fbf1f42823d5cb8cbd272cb9b2b3d5faa681948bc955353890f645b940ecdcc9376e88bc3dae77042d14901b5d22abd00d37a2022c32d925bbf839f65e4eb3a006354b918d5c8eadd2216cc2dbe0ce12e0ad90a383636a1327a91db72cf96cd4e672f68544eaea9591f6ed102e1" } },
"resume" : { "loginTokens" : [
{ "token" : "t9Dxkp4ANsYKuAQav",
"when" : 1372521823708 } ] } },
"emails" : [
{ "address" : "example@example.com",
"verified" : false } ] }
The user is redirected to the "old application". Here we check local storage (this should be the same local storage as Meteor's if we use the same outward-facing host and port, correct?)
and find this information:
Meteor.loginToken: t9Dxkp4ANsYKuAQav
Meteor.userId: HSdbPBuYy5wW6FBPL
The local storage data is read by "the other application", which does a simple database query against the Meteor DB to verify that the local storage information matches what is in the database, perhaps also checking some kind of expiration date. If this matches, the application renders; otherwise it doesn't.
Is this a decently safe way to do it? Will it work to share local storage between the applications?
Of course you'll have to make sure that your WebSockets are running over TLS. LocalStorage uses a simple same-origin policy, so yes, it will work. LocalStorage is as secure as a cookie, so that's OK.
TLDR:
Yes and Yes
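A minimal sketch of the check "the other application" could perform, assuming resume tokens are stored in plaintext as in the user document above (newer Meteor versions store hashed tokens, so this is illustrative only):

```python
import time

def verify_login(user_doc, user_id, login_token,
                 max_age_ms=90 * 24 * 3600 * 1000):
    """Check that the localStorage userId/token pair matches the Meteor
    user document and that the token is within an expiration window
    (90 days here is an arbitrary assumption)."""
    if user_doc["_id"] != user_id:
        return False
    for t in user_doc["services"]["resume"]["loginTokens"]:
        if t["token"] == login_token:
            age_ms = time.time() * 1000 - t["when"]
            return age_ms < max_age_ms
    return False
```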