Backend is not available in the list of defined system mappings in Cloud Connector - HANA

While creating a destination to my HANA system, I get the error 'Backend is not available in the list of defined system mappings in Cloud connector' during the connection test.
The connection in the Cloud Connector is reachable. From reading through blogs, the virtual and internal addresses do not mismatch, and the URL doesn't contain any '_', but I still get the error. Any leads on this kind of issue?
Thanks

The issue was resolved by deleting the Location ID in the destination settings.
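For reference, the Location ID field corresponds to the destination property CloudConnectorLocationId. A minimal sketch of an on-premise destination follows, with illustrative names and values: if your Cloud Connector was installed without a Location ID, this property must be removed or left empty; otherwise it must match the Cloud Connector's Location ID exactly.
Name=MY_BACKEND
Type=HTTP
URL=http://virtualhost:44300
ProxyType=OnPremise
CloudConnectorLocationId=LOC1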

Intermittent HTTP error when loading files from ADLS Gen2 in Azure Databricks

I am getting an intermittent HTTP error when I try to load the contents of files in Azure Databricks from ADLS Gen2. The storage account has been mounted using a service principal associated with Databricks and has been given Storage Blob Data Contributor access through RBAC on the data lake storage account. A sample statement to load is
df = spark.read.format("orc").load("dbfs:/mnt/{storageaccount}/{filesystem}/{filename}")
The error message I get is:
Py4JJavaError: An error occurred while calling o214.load. : java.io.IOException: GET https://{storageaccount}.dfs.core.windows.net/{filesystem}/{filename}?timeout=90 StatusCode=412 StatusDescription=The condition specified using HTTP conditional header(s) is not met.
ErrorCode=ConditionNotMet ErrorMessage=The condition specified using HTTP conditional header(s) is not met.
RequestId:51fbfff7-d01f-002b-49aa-4c89d5000000
Time:2019-08-06T22:55:14.5585584Z
The error does not occur for every file in the filesystem; I can load most of the files, and only some of them fail. I am not sure what the issue is here.
This has been resolved now. The underlying issue was due to a change at Microsoft's end. This is the RCA I got from Microsoft Support:
A storage configuration was turned on incorrectly during the latest storage tenant upgrade. This type of error would only show up for namespace-enabled accounts on the most recently upgraded tenant. The mitigation for this issue was to turn off the configuration on the specific tenant, and we have kicked off the super sonic configuration rollout for all the tenants. We have since added additional storage upgrade validation for ADLS Gen2 to help cover this type of scenario.
I had the same problem with one file today. Downloading the file, deleting it from storage, and putting it back solved the problem.
Tried renaming the file -> didn't work.
Edit: we are seeing it on more files, seemingly at random.
We worked around the problem by copying the entire folder to a new folder and renaming it back to the original name. Jobs run without problems again.
Still the question remains: why did the files end up in this situation?
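For anyone who wants to script that copy-and-recreate workaround in a Databricks notebook, here is a minimal sketch using dbutils (available in Databricks notebooks without an import); the mount paths and folder names are hypothetical:
# Hypothetical paths: copy the affected folder aside, then recreate it.
src = "dbfs:/mnt/storageaccount/filesystem/data"
tmp = "dbfs:/mnt/storageaccount/filesystem/data_copy"
dbutils.fs.cp(src, tmp, recurse=True)  # copy everything to a new folder
dbutils.fs.rm(src, recurse=True)       # remove the original folder
dbutils.fs.cp(tmp, src, recurse=True)  # copy back under the original name
dbutils.fs.rm(tmp, recurse=True)       # clean up the temporary copy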
Same issue here. After some research, it seems it was probably an If-Match ETag condition failure in the HTTP GET request. Microsoft describes returning error 412 when this happens in this post: https://azure.microsoft.com/de-de/blog/managing-concurrency-in-microsoft-azure-storage-2/
Regardless, Databricks seems to have resolved the issue on their end now.
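To illustrate the conditional-request behaviour behind that 412, here is a small Python sketch (the URL and ETag are hypothetical, and authentication is omitted for brevity); any HTTP resource that honours ETags behaves the same way:
import requests

# Hypothetical blob URL; a real request would also need auth headers.
url = "https://storageaccount.dfs.core.windows.net/filesystem/filename"

# The current ETag identifies the stored version of the resource.
current_etag = requests.head(url).headers.get("ETag")

# A GET conditional on a stale or wrong ETag fails the precondition,
# so the server answers 412 (ConditionNotMet) instead of 200.
resp = requests.get(url, headers={"If-Match": '"0xSTALEETAG"'})
print(current_etag, resp.status_code)  # 412 when the stored ETag differs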

Using 'connmanctl config' to set static IP without wired connection

I am currently using 'connmanctl config' to apply static and DHCP settings with a wired connection. I'm curious whether anyone has been successful in applying settings with the cable unplugged?
I would typically use 'connmanctl services' to get a list of services, then perform a string.match(blah, "ethernet_%w+_cable") to extract the wired service name. I have been able to find that service name with the Ethernet cable unplugged, BUT now when using 'connmanctl config':
connmanctl config ethernet_f8dc7a04ea82_cable --ipv4 manual 192.168.91.108 255.255.255.0 192.168.91.1 --nameservers 8.8.8.8
I get this error:
Error ethernet_f8dc7a04ea82_cable: Method "SetProperty" with signature "sv" on interface "net.connman.Service" doesn't exist
As you can see, I am passing the service name to the command, and it is the same service name as when the cable is plugged in. This feature would be useful for equipment that needs to be pre-programmed before reaching the customer. I have researched this error but cannot find anyone else reporting it in the same situation as mine. I have also read many blogs, articles, etc. on trying to achieve this, with nothing that jumps out at me.
Any ideas?
I had to perform this action on the back end with the code that I am using to configure the device. Just an example: settings are applied to /var/lib/connman/ethernet_?????_cable/settings. I built the service name from the MAC address (it does not exist until the network is detected), created the directory /ethernet_?????_cable, then created an empty settings file on the fly. When programming and saving the settings via the equipment I am using, I just insert the settings manually. When a network cable is plugged in and detected, the settings you have applied work wonderfully.
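A minimal Python sketch of that pre-seeding approach, assuming the MAC address is known in advance; the exact keys can vary between ConnMan versions, so verify them against a settings file your own ConnMan has written:
import os

# Hypothetical MAC address; ConnMan names the wired service after it.
mac = "f8dc7a04ea82"
service = f"ethernet_{mac}_cable"
path = f"/var/lib/connman/{service}"
os.makedirs(path, exist_ok=True)

# Key names below are assumptions based on ConnMan's settings-file format.
settings = f"""[{service}]
IPv4.method=manual
IPv4.local_address=192.168.91.108
IPv4.netmask_prefixlen=24
IPv4.gateway=192.168.91.1
Nameservers=8.8.8.8;
AutoConnect=true
"""
with open(os.path.join(path, "settings"), "w") as fh:
    fh.write(settings)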

QvSAPConnector.dll error - Mismatch between SAP Transport and connector versions

I am using QlikView version 11.20.12758.0 SR10 64-bit and SAP v6.0.
After installing the QV SAP connector v6.0, I am trying to connect to SAP by passing the relevant details (IP, client ID, system number, user, password). I am facing the error message below:
Mismatch between SAP Transport and connector versions. Please import the correct SAP transport. Unable to retrieve transport version from system. Invalid parameter 'RFC_FUNCTION_HANDLE' was passed to the API call
I referred to the following links; however, the issue persists:
https://community.qlik.com/thread/99142
https://community.qlikview.com/thread/139623
https://qlikcommunity.qliktech.com/thread/142955
I consulted our SAP administrator to check whether there is an issue with access rights; however, the user has the required access.
I learned that after installing the QvSAPConnector, a Transports folder should appear under the path C:\Program Files\Common Files\QlikTech\Custom Data\QvSAPConnector\
However, the Transports folder is missing.
Please guide me to resolve this error. Thank you!
In order to use the SAP Connectors (whether you are connecting to ERP or SAP BW), you need to import the transports provided by Qlik into your system(s), as these are required for Qlik's connectors to function. The transports contain various "helper" programs (such as the version check shown in your error message screenshot) as well as the authorisation objects that are needed.
The transports you need to import depend on the connector you are using (in your case, the "SAP" connector) and on the version of the system you are importing the transports into.
More information on which transports you should use is given here:
http://help.qlik.com/en-US/connectors/Subsystems/SAP_Connectors_Help/Content/6.4/Installation/Installing-the-transports.htm
As mentioned in the above link:
Data extraction and user profile transports are installed in the SAP system. These are available in the following folder on the computer, copied there during the installation of the Qlik SAP Connector:
C:\Program Files\Common Files\QlikTech\Custom Data\QvSAPConnector\Transports
I noticed that you uploaded an image of a different folder which just contains the SAP Connector logs. The transports are always installed in the above location.
Once the transports are imported, you should then be able to use the connectors to connect and extract data.

Error while getting available partitions. Apache Stratos

I'm facing this error while trying to configure my Stratos partition deployment; it always gives me this error:
Error while getting available partitions. Cause : The service cannot be found for the endpoint reference (EPR) https://x.x.x.x:9443/services/AutoscalerService/
Any suggestions?
Regards,
It looks like the Stratos Manager cannot reach the Autoscaler on the given endpoint. Is it possible for you to check whether you can telnet from the Stratos Manager host to the Autoscaler IP on port 9443?
If you could mention the Stratos version and the deployment model (single JVM/distributed) you are using, we could provide a much better answer.
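If telnet is not available on the host, a quick Python equivalent of that reachability check (the address is a placeholder for your Autoscaler endpoint):
import socket

# Placeholder address: replace with the Autoscaler endpoint from your config.
try:
    socket.create_connection(("x.x.x.x", 9443), timeout=5).close()
    print("Autoscaler port is reachable")
except OSError as exc:
    print(f"Cannot reach Autoscaler: {exc}")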
The cause is a wrong definition of the Autoscaler endpoint in the /repository/conf/cartridge-config.properties file. Check that the specified IP address is reachable from the Stratos Manager, and that /etc/hosts is updated with the domain name specified in cartridge-config.properties.
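As a sketch, the relevant entry would look something like the line below; the exact property name depends on your Stratos version, so treat it as an assumption and check your own cartridge-config.properties:
autoscaler.service.url=https://autoscaler-host:9443/services/AutoscalerService
If a hostname is used there, /etc/hosts on the Stratos Manager must resolve it, e.g.:
x.x.x.x   autoscaler-host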

Listing and adding JMS Physical Destinations to GlassFish v3.1

After adding a destination (Queue) to Destination Resources from the Admin Console at Resources/JMS Resources/Destination Resources, no physical destinations are displayed at server (Admin Server)/JMS Physical Destinations. Instead, the following error message is displayed below the heading:
An error has occured
Unable to list JMS Destinations
Also, on trying to add a new Physical Destination of type 'Queue' at server (Admin Server)/JMS Physical Destinations, the following error message is displayed:
An error has occured
Unable to create JMS Destination
On trying to add a Physical Destination using asadmin on the command line, as follows:
asadmin> create-jmsdest -T queue DemoQueue
the following error is displayed:
remote failure: Unable to create JMS Destination.
Command create-jmsdest failed.
Here, GlassFish Server Open Source Edition 3.1-b24 is running on Ubuntu with kernel 2.6.28-11-server.
Any help is appreciated.
I don't think you should create physical destinations manually. All you need to do to set up JMS resources in GlassFish is define a connection factory and destinations, all under the Resources > JMS Resources branch in the admin interface. When your destinations are used, the physical destinations will be created automatically.
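For example, the same resources can be created from the command line with asadmin (the JNDI names here are illustrative):
asadmin create-jms-resource --restype javax.jms.QueueConnectionFactory jms/DemoConnectionFactory
asadmin create-jms-resource --restype javax.jms.Queue --property Name=DemoQueue jms/DemoQueue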
This confused me to no end the first time, so I sympathise.
For GF v2.1.1 (and I suspect for v3), a physical destination, mq.sys.dmq, is already created and configured, and queues are created here. The messaging server is Sun MQ, and if you intend to use this out of the box, you don't need to create another physical destination.
If you do indeed need to create another physical destination, launch [path-to-glassfish]/imq/bin/imqadmin.exe (or the Ubuntu equivalent) and do it there.
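Alternatively, the broker's command-line tool can create a physical destination directly (the broker must be running; the queue name is illustrative):
imqcmd create dst -t q -n DemoQueue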