I use a typical YARN/Ranger stack with atomic policies for accessing YARN queues. Given a Hadoop user, how can I get the list of queues that user has access to? I can see how it's usually done from the admin side, but what about the user? I went through the YARN APIs but found nothing. In Ranger, a user usually doesn't have enough permissions to get more details about itself. Is the only way to do it to brute-force all queues in the cluster until you find an accessible one?
Unfortunately, the user queue policy is not visible through the REST APIs for Fair Scheduler. You can double-check by running:
curl RM-ADDR:PORT/ws/v1/cluster/scheduler
but looking at the ResourceManager REST API's Cluster Scheduler API documentation, I think you're out of luck.
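For what it's worth, here is a minimal sketch of that double-check with the queue names filtered out (RM-ADDR:PORT is a placeholder, jq is assumed to be available, and the path assumes the Fair Scheduler JSON layout); note it only lists the queues themselves, not per-user ACLs:
# list the Fair Scheduler queue tree from the RM scheduler endpoint
curl -s "http://RM-ADDR:PORT/ws/v1/cluster/scheduler" \
  | jq '.scheduler.schedulerInfo.rootQueue.childQueues'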
If you use Ambari or Cloudera Manager, those might have APIs that will allow you to download the Fair Scheduler's XML file.
Is there an easy way to audit/log actions performed via the RabbitMQ management UI?
For instance, if a user entered the RabbitMQ management UI and published a message through it, I would like to be able to know who published the message and when.
Assume I'm using LDAP for authentication.
Thanks!
I have one namespace and one Deployment (ReplicaSet). My Apache logs should be written outside the pod; how is this possible in Kubernetes?
This is a Community Wiki answer so feel free to edit it and add any additional details you consider important.
You should specify more precisely what exactly you mean by outside the pod, but as David Maze has already suggested in his comment, take a closer look at the Logging Architecture section in the official Kubernetes documentation.
Depending on what you mean by "outside the Pod", a different solution may be optimal in your case.
As you can read there:
Kubernetes provides no native storage solution for log data, but you can integrate many existing logging solutions into your Kubernetes
cluster ... Cluster-level logging architectures are described in assumption that a logging backend is present inside or outside of your cluster.
The three most popular cluster-level logging architectures mentioned there are:
Use a node-level logging agent that runs on every node.
Include a dedicated sidecar container for logging in an application pod.
Push logs directly to a backend from within an application.
The second solution is widely used. Unlike the third one, where pushing the logs needs to be handled by your application container, the sidecar approach is application independent, which makes it a much more flexible solution.
And to complicate things a bit, it can be implemented in two different ways (see the sketch after this list for the first one):
Streaming sidecar container
Sidecar container with a logging agent
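As an illustration of the first variant, here is a minimal streaming-sidecar sketch (the names, the httpd image and the /usr/local/apache2/logs paths are assumptions, not taken from the question; the stock httpd image logs to stdout/stderr, so this presumes your Apache config writes access_log/error_log to files): the Apache container writes its logs to a shared emptyDir volume and the sidecar streams them to its own stdout, where node-level tooling can collect them.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - name: apache
        image: httpd:2.4
        volumeMounts:
        - name: httpd-logs
          mountPath: /usr/local/apache2/logs   # Apache is assumed to write its log files here
      # streaming sidecar: tails the shared log files to its own stdout
      - name: log-streamer
        image: busybox:1.36
        command: ["/bin/sh", "-c"]
        args: ["tail -n+1 -F /logs/access_log /logs/error_log"]
        volumeMounts:
        - name: httpd-logs
          mountPath: /logs
          readOnly: true
      volumes:
      - name: httpd-logs
        emptyDir: {}
The streamed logs then show up under kubectl logs POD -c log-streamer and can be picked up by whatever node-level agent or logging backend you run.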
I'm setting up a new RabbitMQ service in iAPC (Swisscom app cloud) and I need to control the user access of the different producer/consumer applications.
My access control requirement:
Application A can only write to queue X.
Application B can only read from queue X.
RabbitMQ usually provides user management functionality. However, the whole user management part of the admin section in the RabbitMQ management GUI is not available.
What solutions exist in iAPC to manage read/write permissions for the different applications that have an app binding?
Is it even possible to setup different users?
I believe there is no way to add additional users in these managed RabbitMQ service deployments provided by Swisscom. This is quite similar across all of the available shared services (e.g. ElasticSearch or MariaDB) which come with a preset of defined users. I assume that this is true because those are actually shared services (as opposed to dedicated ones), where there may be authentication / security concerns if you are allowed to administer existing users.
For anyone who is interested, here is the way to access your RabbitMQ CloudFoundry service admin interface via the provided environment parameters, to see what is possible:
1. Bind your RabbitMQ service to a running app instance (e.g. MY-APP).
2. Look at the environment of that app with cf env MY-APP.
3. Tunnel the RabbitMQ management port to your localhost:
cf ssh -N -T -L 15000:rabbitmq.service.consul:15672 MY-APP
4. Open a web browser and go to http://localhost:15000.
5. Use the username and password you found in step (2) under rabbitmqent > credentials > management to log in.
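Once the tunnel from step (3) is up, you can also query the management HTTP API through it to see what the provided user is allowed to do; a minimal sketch (USERNAME and PASSWORD stand for the credentials from step (2)):
# confirm which user you are and which tags it has
curl -u USERNAME:PASSWORD http://localhost:15000/api/whoami
# list configure/write/read permissions per vhost
curl -u USERNAME:PASSWORD http://localhost:15000/api/permissions
Note that the preset user may lack the administrator or monitoring tag, in which case the permissions listing only shows what that user is allowed to see.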
I have a running Kubernetes cluster on Google Cloud Platform.
I want to deploy a postgres image to my cluster.
When selecting the image and my cluster, I get the error:
insufficient OAuth scope
I have been reading about it for a few hours now and couldn't get it to work.
I managed to set the scope of the VM to allow APIs:
Cloud API access scopes
Allow full access to all Cloud APIs
But from the GKE cluster details, I see that everything is disabled except Stackdriver.
Why is it so difficult to deploy an image or to change the scope?
How can I modify the cluster permissions without deleting and recreating it?
The easiest way is to delete and recreate the cluster, because there is no direct way to modify the scopes of an existing cluster. However, there is a workaround: create a new node pool with the correct scopes and make sure to delete all of the old node pools. The cluster's scopes will change to reflect the new node pool.
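For illustration, a rough sketch of that node-pool workaround with gcloud (the cluster, zone and pool names, and the broad cloud-platform scope, are placeholders, not taken from the question):
# create a replacement node pool with the OAuth scopes you actually need
gcloud container node-pools create new-pool \
  --cluster my-cluster --zone us-central1-a \
  --scopes "https://www.googleapis.com/auth/cloud-platform"
# once the workloads have rescheduled onto the new pool, remove the old one
gcloud container node-pools delete default-pool \
  --cluster my-cluster --zone us-central1-a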
More details can be found in this post.
We recently started using Prometheus in our production environment. Before, we only had 30-40 nodes for each service and those servers did not change very often, so we just wrote them in prometheus.yml. But now it has become too long to hold in one file and it changes much more frequently than before. So my question is: should I use file_sd_config to move those server lists out of the yml file and change those config files separately, or use Consul for service discovery (which seems much easier for handling changes)?
I have installed a 3-node Consul cluster in the data center, and as far as I can see, if I switch to Consul to solve this problem, I also need to install the Consul client on each server (node) and define its service info. Is that correct? Or does anyone have better advice?
Thanks
I totally advocate the use of a service discovery system. It may be a bit hard to deploy at first, but it will surely be worth it in the future.
That said, Prometheus comes with a lot of service discovery integrations. It's possible that you don't need a Consul cluster. If your servers are in a cloud provider like AWS, GCP, Azure, OpenStack, etc., Prometheus is able to autodiscover the instances.
If you keep running with Consul, the answer is yes, the agent must be running on every node. You can also register services and nodes via the API, but it's easier to deploy the agent.
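To make the two options concrete, here is a rough prometheus.yml sketch showing both side by side (the job names, file glob and Consul address are placeholders):
scrape_configs:
  # option 1: file-based discovery; Prometheus re-reads these files on change,
  # so target lists live outside prometheus.yml and can be updated separately
  - job_name: 'nodes-file-sd'
    file_sd_configs:
      - files:
          - '/etc/prometheus/targets/*.json'

  # option 2: Consul-based discovery; targets come from services registered
  # with the Consul agents
  - job_name: 'nodes-consul-sd'
    consul_sd_configs:
      - server: 'consul.service.consul:8500'
        services: []   # empty list means discover all registered services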