Unable to create labels for WebGL Feature Layer, esriGeometryPolyline is not supported - arcgis

I published a service from ArcGIS Server and I am using it with the ArcGIS JS API 4.9, but for one of the Feature Layers I get this error:
[esri.views.2d.engine.webgl.WGLMeshFactory]
mapview-labeling:unsupported-geometry-type
"Unable to create labels for WebGL Feature Layer, esriGeometryPolyline is not supported"
Now I cannot show the labels of the layer. How can I solve this?

Looking at the 4.9 source code, the esriGeometryPolyline geometry type has null as the value for its possible label placements, which produces exactly the error you mention.
You could try upgrading to 4.11, which has the "center-along" value available for esriGeometryPolyline.
I am currently investigating a case where a user went from 4.4 to 4.11 and got various display problems, probably related to the label settings in the web map and the label placement for polylines.
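If you do upgrade, here is a minimal sketch of enabling polyline labels in 4.11; the layer variable, the NAME field, and the symbol values are placeholders, not something from your question:

// ArcGIS JS API 4.11+: label a polyline FeatureLayer with "center-along"
const labelClass = {
  labelPlacement: "center-along", // the placement now available for esriGeometryPolyline
  labelExpressionInfo: { expression: "$feature.NAME" }, // hypothetical label field
  symbol: {
    type: "text", // autocasts as esri/symbols/TextSymbol
    color: "black",
    haloColor: "white",
    haloSize: 1
  }
};
polylineLayer.labelingInfo = [labelClass]; // polylineLayer is your FeatureLayer
polylineLayer.labelsVisible = true;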

Related

Running automatic annotation in cvat with tensorflow results in status code 400 "No labels found for tf annotation"

I'm trying to run TensorFlow for pre-annotation in CVAT.
I can start the Docker container, and the option shows up in the menu.
However, after selecting the model I get the error:
Could not infer model for the task 10
Error: Request failed with status code 400. "No labels found for tf annotation".
It seems that I need to specify some labels; in what format do I have to configure them, though?
The documentation seems sparse on this one. Maybe someone on here knows something?
Also, if some Stack Overflow wizard with a lot of reputation could create the tag cvat, I would be very happy :)
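For what it's worth, an assumption rather than anything stated above: CVAT task labels can be configured as JSON (in the task's "Raw" labels view), and the TensorFlow annotation component matches task label names against the COCO class names it knows, so the task may need labels named after COCO classes, something like:

[
  { "name": "person", "attributes": [] },
  { "name": "car", "attributes": [] }
]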

Express-browserify and Watson Visual Recognition - TypeError: fs.existsSync is not a function

I'm trying to get Watson Visual Recognition to run client side by using express-browserify, with reference to the node-sdk for watson-developer-cloud. VisualRecognitionV3 makes use of the fs package, hence I get the fs.existsSync error when I try to call it from the client side, as the browser has no filesystem to use. My question is how I go about creating a so-called 'abstraction layer', as I am restricted to using the express-browserify package for cross-origin calls.
This thread is pretty helpful in shedding some light, but I'm not sure where to start regarding the 'abstraction layer', or whether there are any other solutions. Also, would something like socket.io work for this? I've linked a clone of the directory here, as it seems less clunky than pasting the multiple portions below.
The repository can be cloned and just requires a personal iam_apikey with the relevant launch configuration. I appreciate any pointers. Thanks!
I didn't manage to sort this out with express-browserify because of the require('fs')-in-the-browser issue, but I was able to get it running using the express-ws package.
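A minimal sketch of that approach (the endpoint path and message shape are made up, and it assumes the v3-era watson-developer-cloud node-sdk): the browser sends the image over a WebSocket, and the classification, including all fs usage, stays on the server.

// server.js -- keep watson-developer-cloud on the server; the browser
// only talks to this WebSocket endpoint
const express = require('express');
const app = express();
require('express-ws')(app); // adds app.ws(...) routing

const VisualRecognitionV3 = require('watson-developer-cloud/visual-recognition/v3');
const visualRecognition = new VisualRecognitionV3({
  version: '2018-03-19',
  iam_apikey: process.env.IAM_APIKEY // personal key, as in the question
});

app.ws('/classify', (ws) => {
  ws.on('message', (data) => {
    // data: binary image data sent from the browser
    visualRecognition.classify({ images_file: Buffer.from(data) }, (err, result) => {
      ws.send(JSON.stringify(err ? { error: err.message } : result));
    });
  });
});

app.listen(3000);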

Sony A7Sii not showing expected API features

I'm trying to control a Sony a7SII via the pysony Sony Camera API wrapper.
https://github.com/Bloodevil/sony_camera_api
I'm able to connect to the camera, start the session, but I'm not getting the expected list of possible API functions back.
My process looks like this:
import pysony
api = pysony.SonyAPI()
api.startRecMode()
api.getAvailableApiList()
{'id': 1, 'result': [['getVersions', 'getMethodTypes', 'getApplicationInfo', 'getAvailableApiList', 'getEvent', 'actTakePicture', 'stopRecMode', 'startLiveview', 'stopLiveview', 'awaitTakePicture', 'getSupportedSelfTimer', 'setExposureCompensation', 'getExposureCompensation', 'getAvailableExposureCompensation', 'getSupportedExposureCompensation', 'setShootMode', 'getShootMode', 'getAvailableShootMode', 'getSupportedShootMode', 'getSupportedFlashMode']]}
As you can see, the returned list does not contain a full set of controls.
Specifically, I want to be able to set the shutter speed and aperture, which, according to this matrix https://developer.sony.com/develop/cameras/, I should be able to do.
Any ideas would be much appreciated.
Turns out, both pysony and the API are working fine.
To get full API functionality, you must install the Remote app from the store rather than relying on the "embedded" remote that ships with the camera.
Also, as a note: it seems to take a little time for api.startRecMode() to actually update the available API list, so it is sensible to add a small delay to your code.
See:
src/example/dump_camera_capabilities.py

MuleDevKit - Build a Transformer and access it in the palette

I'm trying to build a custom transformer with the Mule DevKit. I succeeded in building one and I can install it in my Studio, but I couldn't figure out how to let the end user set a property (like what is shown for a custom transformer under 'Transformer Settings' -> 'Property') that I can access inside my Transformer class.
Regards,
Raj
Unfortunately, this is a feature not covered by DevKit transformers.
Because DevKit transformers can be called implicitly (Mule's out-of-the-box mechanism for resolving transformers), there are some constraints we have to follow.
The only workaround I can think of is to create a processor that does the transformation for you.
I'll add this as a feature request on our backlog.
HTH

Face Detection not working on Nexus7 4.2

I'm building an Android camera app using the FaceDetectionListener. My app has no problems on the Xperia Z, LG Optimus Black, Galaxy Nexus 4, and some other devices, but on the Google Nexus 7 it gives me an error:
java.lang.IllegalArgumentException: invalid face detection type=0
When I call
params.getMaxNumDetectedFaces()
it returns 0, which means the camera can detect zero faces during the preview with the FaceDetectionListener. I've tried the face unlock feature of the Nexus 7, and it works perfectly, which means it's not the camera hardware. I googled it and found the same problem reported without any answer. I've tried some face detection samples from the internet, but it's the same problem on the Nexus 7.
Unfortunately, it's a platform issue. Adding try { } catch ( ) { } blocks won't help; certain devices may simply require a software upgrade to get fixes.
In the meantime, for devices that don't have platform fixes yet, you may want to fall back to the FaceDetector API: http://developer.android.com/reference/android/media/FaceDetector.html
(it is Bitmap-based, yes, but it can still work for the task of identifying the location of a face).