As we know, HAWQ can register with YARN and obtain resources from it. We can specify the resources used by HAWQ segment nodes. Does YARN manage the HAWQ master node?
No, the HAWQ master is not managed by YARN.
For most systems, the master is not managed by YARN.
Best practice: don't install the YARN NodeManager (which supplies containers to other applications such as Spark) on the HAWQ master node.
The HAWQ master node is not controlled by the YARN ResourceManager. Since YARN manages every node where the YARN NodeManager is installed, we do not recommend installing the NodeManager on the HAWQ master node; that way YARN never controls the HAWQ master.
The HAWQ master won't be managed by YARN. We don't install the NodeManager on the HAWQ master host.
Usually we deploy a cluster like this: HAWQ segments and YARN NodeManagers on the same hosts, and the ResourceManager / NameNode / Secondary NameNode / HAWQ master / HAWQ standby on separate hosts.
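For reference, switching HAWQ's resource manager into YARN mode is done in hawq-site.xml. A minimal sketch using the standard property names; the host, ports, and queue are placeholders:
<property>
  <name>hawq_global_rm_type</name>
  <value>yarn</value>
</property>
<property>
  <name>hawq_rm_yarn_address</name>
  <value>rm-host:8032</value>  <!-- YARN ResourceManager address; placeholder host -->
</property>
<property>
  <name>hawq_rm_yarn_scheduler_address</name>
  <value>rm-host:8030</value>  <!-- YARN scheduler address; placeholder host -->
</property>
<property>
  <name>hawq_rm_yarn_queue_name</name>
  <value>default</value>
</property>
With hawq_global_rm_type set to yarn, the segments acquire containers through YARN while the master stays outside YARN's control, as described above.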
This happened after restarting a node in the cluster. It complains about incompatible_feature_flags and stops. The docs say that once a feature flag is enabled it is impossible to disable it. The only other running node in the cluster has that flag (user_limit) disabled, and once this newly started node completes syncing tables from its peer, the log says:
Application mnesia exited with reason: stopped
BOOT FAILED
===========
Error during startup: {error,
                       {incompatible_feature_flags,
                        {not_active,
                         "All replicas on diskfull nodes are not active yet",
                         rabbit_user,
                         [rabbit@rabbitmq3]}}}
I also tried killing every process related to the rabbit server (including the Erlang one) and editing rabbit@rabbitmq1-feature_flags before starting, but the file gets overridden and there was no success.
I prefer not to enable the user_limit feature flag on the running node, and instead to remove it on this node whatever it takes. How can I reset this node (for example by removing the mnesia directory, or by other means) so that it forgets its already-enabled flag, and then join it to the cluster again?
PS: rabbit@rabbitmq3 is another node in the cluster that is down and causing no harm.
I do not know about other circumstances, but in my case the culprit was the other down node (rabbit@rabbitmq3). I don't know how, but although rabbit@rabbitmq3-feature_flags said that user_limit was not enabled, after I ran rabbitmqctl forget_cluster_node rabbit@rabbitmq3 on the running node and started the other node, it came up successfully and the cluster is OK too.
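A minimal sketch of that recovery sequence, using the node names from the question (the systemd service name is an assumption; use whatever starts your broker):
# On the healthy node: drop the stale, down node from the cluster metadata
rabbitmqctl forget_cluster_node rabbit@rabbitmq3
# Then start the previously failing node and verify cluster membership
systemctl start rabbitmq-server
rabbitmqctl cluster_status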
If you are running RabbitMQ via brew and don't care about your state, then run the following commands:
brew services stop rabbitmq
brew uninstall rabbitmq
rm -rf /usr/local/var/lib/rabbitmq
rm -rf /usr/local/var/log/rabbitmq
rm -rf /usr/local/etc/rabbitmq
brew install rabbitmq
brew services start rabbitmq
[root@internalcrm ~]# yum update
Loaded plugins: etckeeper, fastestmirror, merge-conf
Repository 'remi-php74' is missing name in configuration, using id
Repository 'remi-php80' is missing name in configuration, using id
Loading mirror speeds from cached hostfile
One of the configured repositories failed (Unknown),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:
 1. Contact the upstream for the repository and get them to fix the problem.
 2. Reconfigure the baseurl/etc. for the repository, to point to a working
    upstream. This is most often useful if you are using a newer
    distribution release than is supported by the repository (and the
    packages for the previous distribution release still work).
 3. Run the command with the repository temporarily disabled
        yum --disablerepo=<repoid> ...
 4. Disable the repository permanently, so yum won't use it by default. Yum
    will then just ignore the repository until you permanently enable it
    again or use --enablerepo for temporary usage:
        yum-config-manager --disable <repoid>
    or
        subscription-manager repos --disable=<repoid>
 5. Configure the failing repository to be skipped, if it is unavailable.
    Note that yum will try to contact the repo. when it runs most commands,
    so will have to try and fail each time (and thus. yum will be be much
    slower). If it is a very temporary problem though, this is often a nice
    compromise:
        yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true
Cannot find a valid baseurl for repo: remi-php74
I need to install apollo-engine for my cloud instance. The package has these optional dependencies:
"optionalDependencies": {
"apollo-engine-binary-darwin": "0.2018.4-86-gf35bdc892",
"apollo-engine-binary-linux": "0.2018.4-86-gf35bdc892",
"apollo-engine-binary-windows": "0.2018.4-86-gf35bdc892"
}
Those dependencies are super slow to install on my instance. Is there some way to redirect those packages to a location on disk, or to track them in my version control and do something like yarn install --<option to exclude apollo-engine-binary-*>?
If you are using Docker or any other containerization system, you should be able to use a custom image that provides yarn with those packages preloaded in its cache.
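A minimal sketch of such an image, assuming a node:8 base tag; --ignore-platform is an assumption needed because the darwin and windows binaries don't match a Linux build host:
# Dockerfile: warm yarn's cache with the slow binaries at image build time
FROM node:8
WORKDIR /cache-warmup
# A throwaway project; installing once populates yarn's global cache
RUN echo '{"name":"warmup","version":"1.0.0"}' > package.json \
 && yarn add --ignore-platform \
      apollo-engine-binary-darwin@0.2018.4-86-gf35bdc892 \
      apollo-engine-binary-linux@0.2018.4-86-gf35bdc892 \
      apollo-engine-binary-windows@0.2018.4-86-gf35bdc892 \
 && rm -rf node_modules package.json yarn.lock
# Later yarn installs in containers based on this image resolve these
# tarballs from yarn's cache instead of the network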
I usually work with npm; recently it was suggested to me that yarn is a better alternative, as it can cache packages locally and save network bandwidth.
But yarn is not working when I'm off the network. Can anyone help me out?
Yarn by default works in online mode, unless it is launched with its offline switch:
yarn install --offline
However, the above command works only when yarn is preconfigured with an offline cache. The commands below will set that up:
yarn config set yarn-offline-mirror ./npm-packages-offline-cache
yarn config set yarn-offline-mirror-pruning true
For a detailed walkthrough, please go through this blog post: https://yarnpkg.com/blog/2016/11/24/offline-mirror/
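Putting it together, a sketch of the full workflow; the mirror directory name follows the commands above, and checking it into version control is optional:
# One time, while online: configure and populate the offline mirror
yarn config set yarn-offline-mirror ./npm-packages-offline-cache
yarn config set yarn-offline-mirror-pruning true
yarn install              # copies tarballs into ./npm-packages-offline-cache
# Later, with no network: resolve everything from the mirror
yarn install --offline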
I want to "build" my npm project and create a Docker image from it. That means I need a Docker image that is able to a) run npm and b) run Docker.
Currently I'm struggling to find / create such a Docker image. How can I solve my problem?
Thanks!
Edit:
I managed to put together a combined container, but my build is not able to find a running Docker instance:
Post http:///var/run/docker.sock/v1.20/build?cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&memory=0&memswap=0&rm=1&t=registry.gitlab.com%2Ftss-repocar%2Fapp&ulimits=null: dial unix /var/run/docker.sock: no such file or directory.
* Are you trying to connect to a TLS-enabled daemon without TLS?
* Is your docker daemon up and running?
Post http:///var/run/docker.sock/v1.20/images/registry.gitlab.com/tss-repocar/app/push?tag=: dial unix /var/run/docker.sock: no such file or directory.
* Are you trying to connect to a TLS-enabled daemon without TLS?
* Is your docker daemon up and running?
Use Packer. You run commands in Packer to do all of your npm setup, and it will spit out a Docker image.
Here is Packer:
https://www.packer.io/docs/
And then I found this:
https://www.npmjs.com/package/node-packer
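A minimal sketch of such a Packer template, assuming a node:8 base image and reusing the registry path from the error output; the npm commands are placeholders for your actual build:
{
  "builders": [{
    "type": "docker",
    "image": "node:8",
    "commit": true
  }],
  "provisioners": [
    { "type": "shell", "inline": ["mkdir -p /app"] },
    { "type": "file", "source": "./", "destination": "/app" },
    { "type": "shell", "inline": ["cd /app && npm install && npm run build"] }
  ],
  "post-processors": [{
    "type": "docker-tag",
    "repository": "registry.gitlab.com/tss-repocar/app",
    "tag": "latest"
  }]
}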
To build Docker images, your build container must have access to /var/run/docker.sock (or you have to use Docker-in-Docker).
Assuming your gitlab-ci-multi-runner runs in a Docker container itself, change /etc/gitlab-runner/config.toml to look like this:
volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]
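In context, the runners section of config.toml then looks roughly like this (the runner name and default image are assumptions):
[[runners]]
  name = "docker-builder"   # hypothetical runner name
  executor = "docker"
  [runners.docker]
    image = "node:8"        # any image with npm works; assumption
    volumes = ["/var/run/docker.sock:/var/run/docker.sock", "/cache"]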