I could not find any authorization GET parameter in the InfluxDB docs (https://docs.influxdata.com/influxdb/v1.5/guides/writing_data/).
I see no way of specifying a user name and password. I cannot disable authentication, because the InfluxDB server is a live server that is public and available on the internet. I'm not able to move this server to a local machine, because I want (actually I must) run JMeter in parallel from many machines and send all the results to the same database for later aggregation.
Maybe I could set up an SSH tunnel, but that would be painful: I need to run JMeter on many machines, and it would be a real hassle to set up new SSH tunnels, disable InfluxDB auth, add firewall rules etc. during the test, then restore everything to normal, and then do it all again the next time I need to test...
As per this doc, try setting the URL to:
http://127.0.0.1:8086/write?u=todd&p=influxdb4ever
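The write endpoint accepts the credentials as the u and p query parameters. A quick sanity check from the command line could look like this (the database name and the sample point are just placeholders):
# writes one point in line protocol; db, u and p are passed as query parameters
curl -i -XPOST "http://127.0.0.1:8086/write?db=jmeter&u=todd&p=influxdb4ever" --data-binary "cpu_load,host=server01 value=0.64"
The same URL, including the u and p parameters, can then be used as the InfluxDB URL in the JMeter backend listener, so every load generator writes to the shared database with those credentials.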
I have a CentOS VM with a pre-installed PingAccess server test environment and access to the PingAccess admin UI.
Now I need to set up agent authentication on the system, but sadly I have no experience configuring PingAccess so far. I also find it difficult to find documentation to complete my task.
I would appreciate any hints, pointers in the right direction, or information on how this kind of setup can be configured and what else I might need. Is it even possible to set it up in a local VM?
Here is a slightly more detailed description of the scenario:
There is an application that is not itself able to use a corresponding protocol (OAuth, SAML2, ...), e.g. a small PHP script or something similar, and that cannot do anything other than output a user name it reads from the HTTP headers.
The idea is to set up an agent that adds header attributes, e.g. something like Header-UserName. The application can then access the web server variables and use those values without having to worry about how the authentication works. The agent, on the other hand, speaks the protocols and handles authentication via the server (here PingAccess).
Thanks a lot in advance.
I'm pretty new to Kubernetes and clusters so this might be very simple.
I set up a Kubernetes cluster with 5 nodes using kubeadm following this guide. I got some issues but it all worked in the end. So now I want to install the Web UI (Dashboard). To do so I need to set up authentication:
Please note, this works only if the apiserver is set up to allow authentication with username and password. This is not currently the case with some setup tools (e.g., kubeadm). Refer to the authentication admin documentation for information on how to configure authentication manually.
So I went on to read the authentication page of the documentation, and I decided I want to add authentication via a static password file. To do so I have to append the option --basic-auth-file=SOMEFILE to the API server.
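The file itself appears to be a plain CSV, one user per line: password, user name, user ID, and optionally a quoted list of groups. With made-up values, SOMEFILE would look roughly like this:
mysecretpassword,admin,1
anotherpassword,jane,2,"developers,testers"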
When I run ps -aux | grep kube-apiserver this is the result, so it is already running (which makes sense, because I use it when calling kubectl):
kube-apiserver
--insecure-bind-address=127.0.0.1
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota
--service-cluster-ip-range=10.96.0.0/12
--service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem
--client-ca-file=/etc/kubernetes/pki/ca.pem
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem
--token-auth-file=/etc/kubernetes/pki/tokens.csv
--secure-port=6443
--allow-privileged
--advertise-address=192.168.1.137
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--anonymous-auth=false
--etcd-servers=http://127.0.0.1:2379
A couple of questions I have:
So where are all these options set?
Can I just kill this process and restart it with the option I need?
Will it be started when I reboot the system?
In /etc/kubernetes/manifests there is a file called kube-apiserver.json. This is a JSON file and it contains all the options you can set. I appended --basic-auth-file=SOMEFILE and rebooted the system (right after the change to the file, kubectl wasn't working anymore and the API server was down).
After a reboot the whole system was working again.
Update
I didn't manage to run the dashboard using this. What I did in the end was install the dashboard on the cluster, copy the credentials from the master node (/etc/kubernetes/admin.conf) to my laptop, and run kubectl proxy to proxy the dashboard traffic to my local machine. Now I can access it on my laptop through 127.0.0.1:8001/ui
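For anyone wanting to reproduce that workaround, the steps boil down to something like this (the master address is the one from my setup; adjust the user and paths to your environment):
# copy the admin kubeconfig from the master to the laptop
scp root@192.168.1.137:/etc/kubernetes/admin.conf ~/.kube/config
# start a local proxy to the API server and the dashboard
kubectl proxy
# then open http://127.0.0.1:8001/ui in a browser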
I just found this for a similar use case, where the API server was crashing after adding an option with a file path.
I was able to solve it and maybe this helps others as well:
As described in https://kubernetes.io/docs/reference/setup-tools/kubeadm/implementation-details/#constants-and-well-known-values-and-paths the files in /etc/kubernetes/manifests are static pod definitions. Therefore container rules apply.
So if you add an option with a file path, make sure the file is made available to the pod with a hostPath volume (see the fragment below).
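As a rough illustration (the volume name and path are only examples, and the exact shape of your manifest may differ), the container section of the static pod definition gets a volumeMounts entry:
"volumeMounts": [
  { "name": "basic-auth", "mountPath": "/etc/kubernetes/basic_auth.csv", "readOnly": true }
]
and the pod spec gets a matching volumes entry:
"volumes": [
  { "name": "basic-auth", "hostPath": { "path": "/etc/kubernetes/basic_auth.csv" } }
]
With that in place, --basic-auth-file=/etc/kubernetes/basic_auth.csv points at a path that actually exists inside the container.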
Our team is planning to use Gerrit. So, to get introduced, I set up a server, used OpenID for authentication, and created some test users and test projects in it.
Now we are ready to use it. But we actually prefer LDAP for real use.
So, can I change my authentication system from OpenID to LDAP? What will happen to the current users then?
I want to clear the test projects and changes. How can I do that?
Can I completely delete the existing Gerrit setup and initiate a fresh setup on the same machine? (I tried extracting the JAR in a different folder, but I faced some problems with it.)
I am using Ubuntu 12.04 as my server.
Please help.
Delete the database (you're not using the embedded H2 database anymore, but some MySQL or PostgreSQL server, are you?) plus the directory where Gerrit is running (the -d parameter, see the docs). Additionally, remove the Git repositories if you configured them to be located on a different path.
Then all your data is gone and you can start from scratch.
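A minimal sketch of that reset, assuming a MySQL review database with the conventional name and example paths (adjust the site and Git directories to your installation):
/path/to/gerrit_site/bin/gerrit.sh stop
mysql -u root -p -e "DROP DATABASE reviewdb;"
rm -rf /path/to/gerrit_site
rm -rf /path/to/git        # only if the repositories live outside the site directory
java -jar gerrit.war init -d /path/to/new_gerrit_site
The last command re-runs the interactive init, where you can pick LDAP as the authentication type right away.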
What I would like to do is use PuTTY (or another solution) on Windows to connect to a SAN switch and get the results of a command over SSH.
I use PowerShell as the scripting language and it could be done easily, but I don't want to save the password in the script.
I'm looking for a solution that uses PuTTY from the command line without storing the password in clear text in the script.
What I thought of is launching the script with \RUNAS (through a scheduled task) and passing the actual credentials directly to PuTTY (the switch would have the same password as the account used with the RUNAS). Is that possible?
Or is there any solution using PuTTY with a certificate or something like that?
You may want to consider using key authentication as opposed to a password.
People will say to use a passphrase in addition to the key, but if your alternative is storing the password on your PC in a file anyway, someone with access to your machine owns you in either case. So you just need to generate the keys. The requirement is: no one but you has access to that key file.
http://www.linuxproblem.org/art_9.html
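In practice the key setup boils down to something like this (user, host and the remote command are placeholders, and the switch must of course support public-key authentication):
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa       # generate a key pair without a passphrase
ssh-copy-id admin@san-switch                    # or append ~/.ssh/id_rsa.pub to the switch's authorized_keys by hand
ssh admin@san-switch "show version"             # runs the command with no password prompt
If you stay with PuTTY, the same idea works with a .ppk key created by puttygen and plink -i key.ppk admin@san-switch "show version".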
I'm in the same boat and have to use Windows, but for me www.mingw.org works well: it gives you a shell and the basic *nix tools, which is extremely useful for SSH, connecting to a remote Linux VPS, etc. Cygwin, of course, is similar and has an easier tool (setup.exe, if I recall) to install new apps. I actually use git-bash, which is MinGW with Git. No GUIs. I've found it easy to just drop into the MinGW shell when I need to use ssh, openssl, cut, awk, etc.
So you can run any remote command using SSH from the command line without third-party programs like PuTTY or other GUI tools. Using key authentication and turning off password authentication completely in sshd on the remote device (at least on devices where you have control) is some additional lockdown for the remote device, especially if you're the only one who needs access to it.
Which leaves scheduling the script. There should be a way to do that via a batch file and the Windows scheduler, or from within the command-line environment.
I'll suggest the following options:
use password authentication: store the password in a text file with limited access (only readable by some service account) and launch your script under that account's credentials
same as above, but instead of a text file use a certificate file
write a small program (C#) which uses DPAPI to store the certificate or password in a store specific to the service account (see the sketch after this list)
combine any of the above with the use of BitLocker/EFS
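For the DPAPI option, PowerShell can already do this without a separate C# program, since its SecureString cmdlets use DPAPI under the hood. A rough sketch, with an example path and account name:
# run once, interactively, as the service account: encrypt the password with DPAPI
Read-Host "Switch password" -AsSecureString | ConvertFrom-SecureString | Out-File C:\scripts\switch.cred
# in the scheduled script: read it back and build a credential object
$secure = Get-Content C:\scripts\switch.cred | ConvertTo-SecureString
$cred = New-Object System.Management.Automation.PSCredential("admin", $secure)
The encrypted string can only be decrypted by the same account on the same machine, which is exactly the property you want here.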
None of these options can protect you from an attacker who has admin access to the server, but implementing them will give an increasing (in the order listed) headache to someone trying to break in.
The script will be a weak spot in any case, though.
This is probably not the answer you're looking for, but I wouldn't use PuTTY for this and would rather communicate with the SSH server directly using the SSH.NET library. It's available in both source and binary form, and you can use it from PowerShell too if you like.
Examples: http://sshnet.codeplex.com/wikipage?title=Draft%20for%20Documentation%20page.
Then you'd have a lot of options to store your login credentials securely.
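As a rough sketch of what that could look like from PowerShell with a key file (the DLL path, host, user name and command are all placeholders):
Add-Type -Path "C:\libs\Renci.SshNet.dll"                       # load the SSH.NET assembly
$key = New-Object Renci.SshNet.PrivateKeyFile("C:\keys\switch_key")
$client = New-Object Renci.SshNet.SshClient("san-switch", "admin", $key)
$client.Connect()
$client.RunCommand("show version").Result                        # output of the remote command
$client.Disconnect()
You could just as well authenticate with a credential read from a DPAPI-protected file instead of the key, which keeps the password out of the script itself.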
I recommend setting up 2-factor authentication on the ssh machine that you have to communicate with IF you can't use key authentication.
Google's two-factor authentication can be implemented for SSH and is relatively easy to set up as long as SELinux is disabled (if it isn't disabled, you can add an exception). It would essentially help reduce the risk of compromise and increase security.
I have my Hudson CI server set up. I have a CVS repository that I can only check out from via SSH, but I see no way to convince Hudson to check out via SSH. I tried all sorts of options when supplying my connection string.
Has anyone done this? I gotta think it has been done.
If I still remember CVS correctly, you have to set the CVS_RSH environment variable to ssh. I suspect you need to set this so that your Tomcat process inherits the value.
You can check Hudson's system information to see exactly what environment variables the JVM is seeing (and passes along to the build).
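To verify the setup on the box first, the classic invocation looks like this (user, host and repository path are examples):
export CVS_RSH=ssh
cvs -d :ext:builduser@cvs.example.com:/var/cvsroot checkout myproject
If that works from a shell, the remaining step is making sure CVS_RSH=ssh is also set in the environment that starts Tomcat/Hudson.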
I wrote up an article that tackles this; you can find it here:
http://www.openscope.net/2011/01/03/configure-ssh-authorized-keys-for-cvs-access/
Essentially you want to set up passphraseless ssh keys for your build user. This will allow authentication to occur without the need to work out some kind of way to key in your password.
<edit> i.e. Essentially the standard .ssh key client & server install/exchange.
http://en.wikipedia.org/wiki/Secure_Shell#Key_management
for the jenkins user account:
install user key (public & private part) in ~/.ssh (generate it fresh or use existing user key)
on cvs server:
install user key (public part) in ~/.ssh
add to authorized_keys
back on jenkins user account:
access cvs from the command line as the jenkins user and accept the remote host key (into known_hosts); concrete commands below
* note: any time the remote server changes its key/IP you will need to manually access CVS and accept the key again *
</edit>
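A concrete run-through of those steps might look like this (the Jenkins home, build user and CVS host are placeholders):
# on the Jenkins/Hudson machine, become the jenkins user first
sudo -u jenkins -H bash
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa         # or reuse an existing key
ssh-copy-id builduser@cvs.example.com            # appends the public key to authorized_keys on the CVS server
ssh builduser@cvs.example.com true               # accepts the host key into known_hosts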
There's another way to do it, but you have to manually log in from the build machine to your CVS server and keep the SSH session open so Hudson/Jenkins can piggyback on the connection. That seemed kind of pointless to me though, since you want your CI server to be as hands-off as possible.