I'm adding a new custom service to Ambari.
I have successfully created the service and installed it through the Ambari web UI. After starting the master component of my new service, Ambari reports the master as stopped, even though the master is actually running on the intended node and I can use its API.
How does Ambari check a component's status?
Does it use the status function I provided in the component definition? I don't see any calls to my status function in the Ambari logs.
Or does it use the PID file? My component does not have a PID file.
#TailofGodzilla (cool name btw), when I make custom services, I start with existing open-source examples and then finally create management packs. You can easily reverse-engineer these, including the service status function.
I checked three of these services (Hue, Elk, NiFi), and all of them use a PID file, with corresponding entries in the status function and the status_params file.
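For illustration, here is a minimal sketch of that pattern, assuming Ambari's resource_management library; the file path and names are placeholders, not taken from any of those services:

# status_params.py (hypothetical path; point this at wherever your start script writes the PID)
master_pid_file = "/var/run/myservice/master.pid"

# master.py
from resource_management.libraries.functions.check_process_status import check_process_status
from resource_management.libraries.script import Script

import status_params


class Master(Script):
    # install/configure/start/stop omitted; start should write the PID file

    def status(self, env):
        env.set_params(status_params)
        # Raises ComponentIsNotRunning when the PID file is missing or the
        # process is dead; the Ambari agent calls status() periodically and
        # shows the component as stopped whenever that exception is raised.
        check_process_status(status_params.master_pid_file)


if __name__ == "__main__":
    Master().execute()

Without a PID file (or some other liveness check in status()), the agent has no way to tell that your master is up, which matches the behavior you're seeing.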
I have the Jitterbit Cloud Data Loader on a server, and it worked fine for years. After the last update I can no longer open the UI console, and I cannot uninstall/reinstall the service "Jitterbit Cloud Data Loader Apache Server". When I try to uninstall it, it shows: The system cannot find the file specified. : AH00436: No installed service named "Jitterbit Cloud Data Loader Apache Server". When I try to install it, it says: The name is already in use as either a service name or a service display name. AH00370: Failed to create the "Jitterbit Cloud Data Loader Apache Server" service.
Has anyone seen this problem before?
I've tried installing and reinstalling, but nothing has worked so far. I don't want to lose the configuration of my processes...
I'm running a task with SimpleHTTPOperator on Airflow (Cloud Composer). This task calls an API that runs on a Cloud Run service living in another project, which means I need a service account in order to access that project.
When I try to call the API, I get the following error:
{secret_manager_client.py:88} ERROR - Google Cloud API Call Error (PermissionDenied): No access for Secret ID airflow-connections-call_to_api.
Did you add 'secretmanager.versions.access' permission?
What is the solution to this issue?
Context: Cloud Composer and Cloud Run live in two different projects.
This specific error is unrelated to the cross-project scenario. It seems you have configured Composer/Airflow to use Secret Manager as the primary backend for connections and variables. However, according to the error message, the service account used by Composer is missing the secretmanager.versions.access permission on the connection secret (airflow-connections-call_to_api) you have configured for the API.
Check this part of the documentation.
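For illustration, a minimal sketch of the usual fix, with the project ID and Composer service account email below as placeholders: grant that service account the Secret Manager accessor role, which contains the secretmanager.versions.access permission mentioned in the error:

gcloud projects add-iam-policy-binding your-secrets-project-id \
    --member="serviceAccount:your-composer-sa@your-project-id.iam.gserviceaccount.com" \
    --role="roles/secretmanager.secretAccessor"

Alternatively, the role can be granted on the individual secret instead of the whole project.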
I set up a Flink cluster on YARN, and I can submit jobs successfully by typing the relevant commands on the hosts.
But this is not as convenient as the web UI (I have tested submitting jobs via the web UI on a Flink standalone cluster).
When I click the "Submit new Job" button, the page is as follows:
When I click the "here" hyperlink, it jumps to a page with a seemingly random host IP from the cluster and a "random" port. Since we do not open all ports to the public network, this page shows connection refused.
I tried to debug the JS code to find out whether some config triggers this problem, and found two code fragments:
It seems this page does not work well with Flink on YARN.
So, can I submit a job to Flink on YARN via the web UI? And how?
As the message states, the YARN proxy you are seeing does not allow file uploads. If you really want to upload jobs via the web UI on YARN, you can find out the real IP of the JobManager and go to that IP directly (bypassing the YARN proxy).
There are some issues with that approach, though: you need network access to that node, which is usually not the case on YARN (and is most probably what you are hitting).
Flink on YARN has two modes: session and per-job. If you need to submit via the web UI, you must first create a YARN session and open that session's web UI to submit; per-job applications cannot be submitted via the web UI.
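For illustration, a minimal sketch of the session approach, assuming you run it from the Flink distribution directory on a client host with YARN configured:

# Start a long-running Flink session on YARN, detached
./bin/yarn-session.sh -d

When the session starts, it prints the JobManager web interface URL; as noted above, you may need to reach that host and port directly rather than through the YARN proxy for the upload form to work.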
I have a .NET Core stateless WebAPI service running inside a Service Fabric local cluster.
When I try to start the NServiceBus endpoint:
return Endpoint.Start(endpointConfiguration).GetAwaiter().GetResult();
I get this exception:
Access to the path 'C:\SfDevCluster\Data_App_Node_0\AppType_App10\App.APIPkg.Code.1.0.0.diagnostics' is denied.
How can this be solved? VS is running as administrator.
The issue you are having occurs because the folder you are trying to write to is not meant to be written to by your application.
The package folder is used to store your application binaries and can be recreated dynamically whenever an application is hosted on the node.
Also, the binaries are reused by multiple service instances running on the same node, so different instances might compete for the same files.
You should instead instruct your application to write to the work directory:
public Stateless1(StatelessServiceContext context) : base(context)
{
    // Node-local folder that this service instance is allowed to write to
    string workdir = context.CodePackageActivationContext.WorkDirectory;
}
The code above will give you a path like this:
'C:\SfDevCluster\Data_App_Node_0\AppType_App10\App.APIPkg.Code.1.0.0.diagnostics\work'
This folder is dynamic and will change depending on the node or instance your application is running on. When it is created, your application already has permission to write to it.
For more info, see:
how-do-i-get-files-into-the-work-directory-of-a-stateless-service?forum=AzureServiceFabric
Open the folder's properties, Security tab
Select ServiceFabricAllowedUsers
Add Write permission
We'd like to set up a notification engine that uses AMQP. To achieve this, we're using RabbitMQ. That part is fine; the server is installed and configured.
Now we'd like to access the RabbitMQ message queues from a browser, so we need a wrapper around AMQP. For this, we found deepstream.io. This fits especially well because we use Polymer as the frontend, which is supported by deepstream.io.
We configured deepstream.io to use RabbitMQ as the backend, but the connection from Polymer to deepstream.io does not work:
The <ds-connection> element sets up the connection (we can see this in the deepstream server log as INCOMING_CONNECTION), but the <ds-login> component seems to be the problem. After a long timeout, the log file reports a CONNECTION_AUTHENTICATION_TIMEOUT.
How can I set the user name and password specified in the deepstream.io config file in the <ds-login> component?
Thank you!
According to the ds-tutorial-polymer repo you connect to deepstream as follows:
<ds-connection
url="localhost:6020"
ds="{{ds}}">
</ds-connection>
<template is="dom-if" if="[[ds]]">
<ds-login
auto-login
ds="[[ds]]">
</ds-login>
<todos-list
name="polymer_example/todos"
ds="[[ds]]">
</todos-list>
</template>
This exposes deepstream as a global ds for you to pass to other records and lists.
If you switch off auto-login within ds-login, you will need to call the login method on the prototype. An example (and the rest of the documentation) can be seen here:
http://deepstreamio.github.io/deepstream.io-tools-polymer/components/deepstream.io-tools-polymer/#ds-login
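For illustration, a minimal sketch of a manual login, assuming the global ds client exposed above and deepstream's client login(authParams, callback) signature; the credentials are placeholders for whatever your deepstream.io config's auth section expects:

<script>
  // Manual login using the global `ds` client set up by <ds-connection>.
  // Replace the placeholder credentials with the ones from your deepstream
  // config's auth section.
  ds.login({ username: 'myUser', password: 'myPassword' }, function (success) {
    console.log('deepstream login ' + (success ? 'succeeded' : 'failed'));
  });
</script>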