Guys
I deployed a 6-node Ignite cluster for a Spark job, and it works fine. However, when I try to use Visor for administration, I cannot see the topology.
Here is the config file.
When I enter the Visor console and open the config file, I get an "Empty topology" error.
Any suggestions? How can I see the topology? Thanks.
Guys
I found the root cause. I use static IP-based discovery, and I only configured a list of single IP:port addresses.
I ran Visor on one of the nodes, so Visor bound to port 47501, but the static IP list in the config file did not contain that server:port pair. As a result, Visor could not join the Ignite cluster or communicate with the nodes.
Use a port range in the address list to fix it:
172.16.59.141:47500..47509
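For anyone else hitting this, a minimal sketch of the discovery section with the port range (assuming the standard TcpDiscoveryVmIpFinder; adapt it to your actual config file):

<property name="discoverySpi">
  <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
    <property name="ipFinder">
      <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
        <property name="addresses">
          <list>
            <!-- the range covers extra local nodes such as Visor, which bind to 47501+ -->
            <value>172.16.59.141:47500..47509</value>
          </list>
        </property>
      </bean>
    </property>
  </bean>
</property>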
I don't see anything suspicious in your config.
I want to see logs from all nodes: the 6 server nodes and the Visor node.
To get logs from Visor, please run:
./bin/ignitevisor.sh -v
To get logs from the server nodes, see <ignite_home>/work/logs.
Could you please zip all these logs and provide a link to them (on any file-sharing service)?
Also, I don't see any reason why you need to run Visor with sudo rights. Could you please run Visor without sudo?
I have just downloaded and configured my first ABP solution and I'm having a performance problem.
I chose the option to have a separate site for IdentityServer. I configured a database and changed the ConnectionString entries in the appsettings.json files of the Hosts project, Migration project, and the IdentityServer project. I followed all the instructions in the getting started tutorial.
Everything (eventually) works but each time I try to authenticate myself either to the Swagger site or the Angular website, there is a significant (minutes-long) delay before I am either logged in or the request times out.
Suspected Problem:
So I read that the site uses a Redis cache during login. I have never used this technology before, so I had to get it installed.
I used the following commands to pull down the image and run it in Docker - another technology that I have not used before:
PS C:\WINDOWS\system32> docker pull redis
Using default tag: latest
latest: Pulling from library/redis
a330b6cecb98: Pull complete
14bfbab96d75: Pull complete
8b3e2d14a955: Pull complete
5da5e1b21a2f: Pull complete
6af3a5ca4596: Pull complete
4f9efe5b47a5: Pull complete
Digest: sha256:e595e79c05c7690f50ef0136acc9d932d65d8b2ce7915d26a68ca3fb41a7db61
Status: Downloaded newer image for redis:latest
docker.io/library/redis:latest
PS C:\WINDOWS\system32> docker run --name development9-redis -d redis
eee1a05c90e7a492a19eab025fe307b17040ba35ea2f3bc5fbd5df1bab372028
This appeared to do something, so I assume my cache is running and available. Am I missing something? Could a misconfiguration of Redis be the cause of my performance problem?
Please ask me any relevant questions you'd like and I will describe my set up. Thanks.
As you suspected, your performance issue is probably related to an improper Redis configuration; a misconfigured cache can really degrade response time. Note that your docker run command above did not publish the Redis port (no -p flag), so the application on your host cannot reach the container at all.
You need to check that Redis is listening on port 6379, and also check whether it actually receives requests.
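For example (assuming the container name from your docker run above), you can verify both from the host:

docker exec -it development9-redis redis-cli ping      # expect: PONG
docker exec -it development9-redis redis-cli monitor   # streams incoming commands, so you can see whether requests arrive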
You might find this comment useful if you are wondering why Redis is needed at all.
(Redis can help you share data between IdentityServer and your host application.)
"Run the command docker run --name redis-container -p 6379:6379 -d redis and change the Redis connection string in your appsettings to localhost:6379."
https://github.com/abpframework/abp/issues/3487#issuecomment-611208048
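To spell that out against your setup (a sketch; the "Redis" key below follows the ABP template convention, so verify it against your own appsettings.json):

# recreate the container with the port published to the host
docker rm -f development9-redis
docker run --name development9-redis -p 6379:6379 -d redis

# then, in each appsettings.json that references Redis:
"Redis": {
  "Configuration": "localhost:6379"
}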
I have implemented a Redis cache in a .NET Core 2.1 application. The issue is that I only have the development connection string, and I want to configure and test the Redis cache on my local PC. I have read somewhere that this is possible using Chocolatey. Can anybody refer me to a link?
PS: When I tried to use the Redis cache against the development server over VPN, a popup asked me to select a "ResultBox.cs" file. I created a new ResultBox.cs file and gave it that path, but when I call the rediscache.Get() method it just opens ResultBox.cs and then nothing happens. Can anybody tell me what ResultBox.cs is for?
I have found a way to configure Redis locally using Chocolatey; use this link. If you face MISCONF issues while testing in redis-cli, this link will be helpful.
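For anyone who cannot follow the links, the Chocolatey route is roughly this (a sketch; assumes the redis-64 package, the Windows port of Redis):

choco install redis-64
redis-server            # starts Redis on the default port 6379
redis-cli ping          # expect: PONG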
You can run a local Docker Redis image. See this and this for reference.
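Something along these lines should work (the container name is arbitrary):

docker run --name local-redis -p 6379:6379 -d redis   # publish the default Redis port to the host
docker exec -it local-redis redis-cli ping            # smoke test; expect: PONG

Then point your connection string at localhost:6379.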
For reasons too insane to even go into, I am attempting to install the Bitnami Magento 1.9.2.4 image on a fresh Amazon AWS/Lightsail Ubuntu 16.04 instance (2 GB, to avoid complaints and be sure I don't run into anything unnecessary).
I think this is really more of an Apache question. After I finish the install (success), I can't get the server to respond via the instance IP address at the default port (8080).
Regarding the old Bitnami image: you can still get (or wget) that Magento 1.9.2.4 installer; it's over here:
wget "https://downloads.bitnami.com/files/stacks/magento/1.9.2.4-3/bitnami-magento-1.9.2.4-3-linux-x64-installer.run"
So, for the sake of anyone trying to work through the whole process: once you pull the installer down to your instance, you need to chmod it to 755. This assumes you are in the directory with your download:
chmod 755 bitnami-magento-1.9.2.4-3-linux-x64-installer.run
Then run it using its full path, like:
/home/ubuntu/bitnami-magento-1.9.2.4-3-linux-x64-installer.run
The install is going to ask a bunch of questions; for anyone keeping track, my answers were all yes (i.e. yes to Git, phpMyAdmin, Beetailer... whatever that is).
Then I created an admin user / password etc.
As far as ports go, I didn't have anything running on 8080, so the installer defaulted HTTP to 8080, HTTPS to 8443, and MySQL to 3306 (more on ports in a minute).
I think the Host/Domain setting is one of the keys to this problem. When I couldn't get the server to respond, I just recreated the instance and tried a different domain during the install process. I tried: the internal AWS IP, the external (actual) IP, and 127.0.0.1.
Here's what the Magento 1.9 Domain prompt looks like:
So basically that sort of brings us up to date.
Once I finished the install, like a normal human used to using Bitnami as a cloud image, I assumed the server would respond at the default path on the IP address it was running on, i.e.:
BASEIPADDRESS:8080/magento
Not the case. When I hit that, the server does NOT respond, hence the question. In addition to the above, I have also tried BASEIPADDRESS on its own, and BASEIPADDRESS:8080.
Results of checking the open ports
So since the server is not responding I figured I would check the ports.
First I checked using netstat:
netstat -lntu
I got back:
Then I realized that netstat is now deprecated... so I went with:
ss -lntu
I got back:
(excuse the images, formatting wouldn't work for text)
To me it looks like 8080 (default) is open in both of those results. So why isn't the server responding at the default location?
Bitnami Status = OK
Checking the status with:
/home/ubuntu/magento-1.9.2.4-3/ctlscript.sh status
Everything looks good:
apache already running
mysql already running
Memcached not running
Since it said Memcached was not running, I started memcached to see if that was the issue; it was not.
I can access the instance via SSH and yes I am sure the IP is right. See images above.
I also posted this to the Bitnami community but haven't heard anything over there. I will cross-populate as I get ideas.
It looks to me like you configured Magento using the private IP address, so you would not be able to access it from your browser. A way to check is to execute the following command on the instance itself:
curl -L 127.0.0.1:8080/magento
If that provides output while the public IP does not, then the IP is misconfigured, and you would need to reinstall using the proper IP.
So this ended up being PRIMARILY due to not running the Bitnami stack installer as root / sudo:
sudo /home/ubuntu/bitnami-magento-1.9.2.4-3-linux-x64-installer.run
Why Install with Sudo on AWS/Lightsail?
The reason you need to install with sudo is that, when run as a normal user (i.e. not root), the installer defaults to port 8080, which is NOT open on AWS by default (a non-root process cannot bind to privileged ports below 1024, which is why the installer falls back to 8080). To complicate matters further, you may not be able to get things running properly even if you manually swap to port 80 AFTER you run the installer.
To avoid the scenario where port 80 requires root access to use, I just re-created my instance and ran the installer as root with the above command.
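(If you would rather keep an existing 8080 install instead of re-installing, the port can also be opened in the Lightsail firewall; a sketch using the AWS CLI, assuming an instance named my-instance:)

aws lightsail open-instance-public-ports \
  --instance-name my-instance \
  --port-info fromPort=8080,toPort=8080,protocol=TCP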
Host Setting
During install I selected the public IP for the "Host" prompt and everything worked as I thought it might (straight out of the box).
Thanks to Javier Salermon, who put me on the right track, and the devs at Bitnami for clueing me in to the fact that 8080 is not open by default.
Traefik v1.3.1
Docker CE for Windows: 17.06.0-ce-win18 (12627)
I have the /acme folder mapped to a host volume that contains the file acme.json. With the Traefik 1.3.1 update, I noticed that Traefik gets stuck in an infinite loop complaining that "permissions 755 for /etc/traefik/acme/acme.json are too open, please use 600". The only solution I've found is to remove acme.json and let Traefik re-negotiate the certs. Unfortunately, if I need to restart the container, I have to remove acme.json again or I'm stuck with the same issue!
My guess is that the issue lies with the Windows volume mapped to Docker but I was wondering what the recommended workaround would even be for this?
Can I change permissions on shared volumes for container-specific deployment requirements?
No, at this point, Docker for Windows does not enable you to control (chmod) the Unix-style permissions on shared volumes for deployed containers, but rather sets permissions to a default value of 0755 (read, write, execute permissions for user, read and execute for group) which is not configurable.
So Traefik is not compatible with shared volumes on regular Windows because of this POSIX permissions check. It may work in the Windows Subsystem for Linux, since that has a Unix-style permission system.
I stumbled across this issue when trying to get Traefik running on Docker for Windows, and ended up getting it working by adding a few lines to a Dockerfile that create acme.json and set its permissions. I then built the image, and despite it throwing the "Docker image from Windows against a non-Windows Docker host" security warning, the permissions on acme.json checked out and it worked!
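The gist of those few lines is something like this (a sketch only; the linked Dockerfile below is the authoritative version, and the tag is an assumption since RUN needs an image with a shell):

FROM traefik:1.3.1-alpine
# bake acme.json into the image with the permissions Traefik insists on,
# sidestepping the fixed 0755 mode of Docker for Windows shared volumes
RUN mkdir -p /etc/traefik/acme \
 && touch /etc/traefik/acme/acme.json \
 && chmod 600 /etc/traefik/acme/acme.json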
I set up a repo with auto-builds to Docker Hub here for further testing:
https://hub.docker.com/r/guerillamos/traefik/
https://github.com/guerillamos/traefikwin/blob/master/Dockerfile
Once I got that built, I switched the image out in my docker-compose file, and my DNS challenge to Cloudflare worked like a charm according to the logs.
I hope this helps someone!
I'm pretty new to Kubernetes and clusters so this might be very simple.
I set up a Kubernetes cluster with 5 nodes using kubeadm, following this guide. I ran into some issues, but it all worked in the end. Now I want to install the Web UI (Dashboard). To do so, I need to set up authentication:
Please note, this works only if the apiserver is set up to allow authentication with username and password. This is not currently the case with some setup tools (e.g., kubeadm). Refer to the authentication admin documentation for information on how to configure authentication manually.
So I read the authentication page of the documentation and decided to add authentication via a static password file. To do so, I have to append the option --basic-auth-file=SOMEFILE to the API server.
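For context, the static password file is a CSV with at least three columns: password, user name, user ID, plus optional group names. A hypothetical SOMEFILE could look like:

# /etc/kubernetes/basic_auth.csv (path and entries are illustrative)
mypassword,admin,1
otherpassword,jane,2,"group1,group2"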
When I run ps -aux | grep kube-apiserver, this is the result, so it is already running (which makes sense, because I use it whenever I call kubectl):
kube-apiserver
--insecure-bind-address=127.0.0.1
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota
--service-cluster-ip-range=10.96.0.0/12
--service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem
--client-ca-file=/etc/kubernetes/pki/ca.pem
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem
--token-auth-file=/etc/kubernetes/pki/tokens.csv
--secure-port=6443
--allow-privileged
--advertise-address=192.168.1.137
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--anonymous-auth=false
--etcd-servers=http://127.0.0.1:2379
A couple of questions I have:
Where are all these options set?
Can I just kill this process and restart it with the option I need?
Will it be started again when I reboot the system?
In /etc/kubernetes/manifests there is a file called kube-apiserver.json. This is a JSON file that contains all the options you can set. I appended --basic-auth-file=SOMEFILE and rebooted the system (right after I changed the file, kubectl stopped working and the API was shut down).
After a reboot the whole system was working again.
Update
I didn't manage to run the dashboard using this. What I did in the end was install the dashboard on the cluster, copy the credentials from the master node (/etc/kubernetes/admin.conf) to my laptop, and run kubectl proxy to proxy the dashboard traffic to my local machine. Now I can access it on my laptop at 127.0.0.1:8001/ui.
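For anyone wanting to reproduce that workaround, it boils down to this (host name and paths are illustrative):

scp ubuntu@master:/etc/kubernetes/admin.conf ~/.kube/config   # copy the admin kubeconfig to the laptop
kubectl proxy                                                 # proxies the API server (and dashboard) to localhost
# then browse to http://127.0.0.1:8001/ui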
I just found this while tackling a similar use case: the API server was crashing after I added an option with a file path.
I was able to solve it and maybe this helps others as well:
As described in https://kubernetes.io/docs/reference/setup-tools/kubeadm/implementation-details/#constants-and-well-known-values-and-paths, the files in /etc/kubernetes/manifests are static pod definitions, so container rules apply.
So if you add an option with a file path, make sure you make it available to the pod with a hostPath volume.
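For example, a sketch of the manifest additions (newer kubeadm versions use kube-apiserver.yaml rather than .json, and the file path here is illustrative):

# under the kube-apiserver container, next to the flag
# --basic-auth-file=/etc/kubernetes/auth/basic_auth.csv:
    volumeMounts:
    - name: basic-auth
      mountPath: /etc/kubernetes/auth
      readOnly: true
# under the pod spec, at the same level as containers:
  volumes:
  - name: basic-auth
    hostPath:
      path: /etc/kubernetes/auth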