I have just downloaded and configured my first ABP solution and I'm having a performance problem.
I chose the option to have a separate site for IdentityServer. I configured a database and changed the connection string entries in the appsettings.json files of the Host project, the Migration project, and the IdentityServer project. I followed all the instructions in the getting started tutorial.
Everything (eventually) works but each time I try to authenticate myself either to the Swagger site or the Angular website, there is a significant (minutes-long) delay before I am either logged in or the request times out.
Suspected Problem:
So I read that the site uses a Redis cache during login. I have never used this technology before, so I had to get it installed.
I used the following commands to pull down the image and run it in Docker - another technology that I have not used before:
PS C:\WINDOWS\system32> docker pull redis
Using default tag: latest
latest: Pulling from library/redis
a330b6cecb98: Pull complete
14bfbab96d75: Pull complete
8b3e2d14a955: Pull complete
5da5e1b21a2f: Pull complete
6af3a5ca4596: Pull complete
4f9efe5b47a5: Pull complete
Digest: sha256:e595e79c05c7690f50ef0136acc9d932d65d8b2ce7915d26a68ca3fb41a7db61
Status: Downloaded newer image for redis:latest
docker.io/library/redis:latest
PS C:\WINDOWS\system32> docker run --name development9-redis -d redis
eee1a05c90e7a492a19eab025fe307b17040ba35ea2f3bc5fbd5df1bab372028
This appeared to do something, so I assume my cache is running and available. Am I missing something? Could a misconfiguration of Redis be the cause of my performance problem?
Please ask me any relevant questions you'd like and I will describe my set up. Thanks.
As you've pointed out, your performance issue is probably related to an improper Redis configuration; a misconfigured or unreachable cache can dramatically increase response times.
You need to check that Redis is running on port 6379, and also check whether it is actually receiving requests.
You might find this comment useful if you're wondering why Redis is needed at all.
(Redis helps you share data between IdentityServer and your host application.)
"Run the command docker run --name redis-container -p 6379:6379 -d redis and change the Redis connection string in your appsettings to localhost:6379."
https://github.com/abpframework/abp/issues/3487#issuecomment-611208048
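For reference, in the ABP templates the cache connection usually comes from a "Redis" section in appsettings.json, along these lines (treat this as a sketch, since key names can vary slightly between template versions), and it has to point at the published port in every application that uses the distributed cache (Host and IdentityServer):

"Redis": {
  "Configuration": "localhost:6379"
}

You can also sanity-check the container itself with docker exec -it redis-container redis-cli ping, which should answer PONG.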
I have a Dockerfile where I build an Apache web server with some custom configurations, etc.
By building the Dockerfile I create an image that can be used in a Kubernetes deployment YAML file.
Everything is working properly, but after deployment my Apache service is down in every container of every pod.
Obviously I can exec into every container and run /etc/init.d/apache2 start, but this solution is not very smart...
So my question is: how can I make my custom Apache run automatically when the deployment YAML file is applied?
PS: I tried this workaround: with the Dockerfile I created a Docker container, exec'd into it, and started Apache. Then I created a new image from that container (docker commit + gcloud image push), but when I deploy the application I always find Apache down.
Well, first things first - I would very much recommend just using the official httpd image and making your custom configurations from there. Their documentation states this in the following paragraph:
Configuration
To customize the configuration of the httpd server, just COPY your custom configuration in as /usr/local/apache2/conf/httpd.conf.
FROM httpd:2.4
COPY ./my-httpd.conf /usr/local/apache2/conf/httpd.conf
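Then it's just a normal build and run (the image and container names here are placeholders):

docker build -t my-apache2 .
docker run -dit --name my-running-app -p 8080:80 my-apache2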
However, if you're dead-set on building everything yourself, you'll notice that inside the Dockerfile for the official image they copy in a Bash script and set it as the CMD. This works because a Docker container should run a single foreground process; this is why, as you stated, running Apache from its init service is a bad idea.
You can find the script they're running here; it's very short at 7 lines, so you shouldn't have too much trouble figuring out where to go from here.
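If you do keep your own image, the key change is to run Apache in the foreground as the container's CMD rather than starting it as an init service. A minimal sketch for a Debian/Ubuntu-style apache2 install (the base image, package and config paths are assumptions, adjust to your setup):

FROM ubuntu:22.04

# install apache2 and drop in the custom configuration (path is an assumption)
RUN apt-get update && apt-get install -y apache2 && rm -rf /var/lib/apt/lists/*
COPY ./my-apache2.conf /etc/apache2/apache2.conf

EXPOSE 80

# run Apache in the foreground so it stays the container's main process
CMD ["apachectl", "-D", "FOREGROUND"]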
Best of luck!
Traefik v1.3.1
Docker CE for Windows: 17.06.0-ce-win18 (12627)
I have the /acme folder routed to a host volume which contains the file acme.json. With the Traefik 1.3.1 update, I noticed that Traefik gets stuck in an infinite loop complaining that the "permissions 755 for /etc/traefik/acme/acme.json are too open, please use 600". The only solution I've found is to remove acme.json and let Traefik re-negotiate the certs. Unfortunately, if I need to restart the container, I have to remove acme.json again or I'm stuck with the same issue again!
My guess is that the issue lies with the Windows volume mapped to Docker but I was wondering what the recommended workaround would even be for this?
Can I change permissions on shared volumes for container-specific deployment requirements?
No, at this point, Docker for Windows does not enable you to control (chmod) the Unix-style permissions on shared volumes for deployed containers, but rather sets permissions to a default value of 0755 (read, write, execute permissions for user, read and execute for group) which is not configurable.
Traefik is not compatible with regular Windows due to the POSIX permissions check. It may work in the Windows Subsystem for Linux since that has a Unix-style permission system.
Stumbled across this issue when trying to get Traefik running on Docker for Windows... I ended up getting it working by adding a few lines to a Dockerfile to create acme.json and set its permissions. I then built the image, and despite it throwing the "Docker image from Windows against a non-Windows Docker host" security warning, when I checked the permissions on the acme.json file it worked!
I set up a repo and have it auto-building on Docker Hub here for further testing.
https://hub.docker.com/r/guerillamos/traefik/
https://github.com/guerillamos/traefikwin/blob/master/Dockerfile
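The whole trick is just creating acme.json inside the image with the permissions Traefik expects, so the check no longer depends on the Windows host volume. Roughly like this (a sketch only; the linked repo is the real source, and the RUN step assumes an image variant that ships a shell, e.g. an alpine-based tag):

# assumption: an alpine-based tag so the RUN step has a shell available
FROM traefik:1.3.1-alpine

# pre-create the acme store with the 600 permissions Traefik insists on
RUN mkdir -p /etc/traefik/acme \
 && touch /etc/traefik/acme/acme.json \
 && chmod 600 /etc/traefik/acme/acme.json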
Once I got that built I switched the image out in my docker-compose file and my DNS challenge to Cloudflare worked like a charm according to the logs.
I hope this helps someone!
I'm pretty new to Kubernetes and clusters so this might be very simple.
I set up a Kubernetes cluster with 5 nodes using kubeadm following this guide. I got some issues but it all worked in the end. So now I want to install the Web UI (Dashboard). To do so I need to set up authentication:
Please note, this works only if the apiserver is set up to allow authentication with username and password. This is not currently the case with some setup tools (e.g., kubeadm). Refer to the authentication admin documentation for information on how to configure authentication manually.
So I went on to read the authentication page of the documentation, and I decided I want to add authentication via a static password file. To do so I have to append the option --basic-auth-file=SOMEFILE to the API server.
When I run ps -aux | grep kube-apiserver this is the result, so it is already running (which makes sense, because it is used whenever I call kubectl):
kube-apiserver
--insecure-bind-address=127.0.0.1
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota
--service-cluster-ip-range=10.96.0.0/12
--service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem
--client-ca-file=/etc/kubernetes/pki/ca.pem
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem
--token-auth-file=/etc/kubernetes/pki/tokens.csv
--secure-port=6443
--allow-privileged
--advertise-address=192.168.1.137
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
--anonymous-auth=false
--etcd-servers=http://127.0.0.1:2379
A couple of questions I have:
So where are all these options set?
Can I just kill this process and restart it with the option I need?
Will it be started when I reboot the system?
In /etc/kubernetes/manifests there is a file called kube-apiserver.json. This JSON file contains all the options you can set. I appended --basic-auth-file=SOMEFILE and rebooted the system (right after changing the file, kubectl stopped working and the API was shut down).
After a reboot the whole system was working again.
Update
I didn't manage to run the dashboard using this. What I did in the end was install the dashboard on the cluster, copy the keys from the master node (/etc/kubernetes/admin.conf) to my laptop, and run kubectl proxy to proxy the dashboard traffic to my local machine. Now I can access it on my laptop at 127.0.0.1:8001/ui.
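For anyone reproducing that last step, it comes down to something like this (the master address and local paths are placeholders):

# copy the cluster credentials from the master to the local machine
scp root@<master-ip>:/etc/kubernetes/admin.conf ~/.kube/config

# proxy the API server (and with it the dashboard) to localhost
kubectl proxy

# then open http://127.0.0.1:8001/ui in a browser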
I just found this for a similar use case, and the API server was crashing after adding an option with a file path.
I was able to solve it and maybe this helps others as well:
As described in https://kubernetes.io/docs/reference/setup-tools/kubeadm/implementation-details/#constants-and-well-known-values-and-paths the files in /etc/kubernetes/manifests are static pod definitions. Therefore container rules apply.
So if you add an option with a file path, make sure you make it available to the pod with a hostPath volume.
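Whether the manifest on your version is kube-apiserver.json or kube-apiserver.yaml, the idea is the same. A minimal sketch in YAML form (the file path /etc/kubernetes/auth/basic-auth.csv and the volume name are assumptions; only the flag, the volumeMount and the hostPath volume matter):

spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --basic-auth-file=/etc/kubernetes/auth/basic-auth.csv
    # ...the rest of the existing flags...
    volumeMounts:
    - name: basic-auth
      mountPath: /etc/kubernetes/auth
      readOnly: true
  volumes:
  - name: basic-auth
    hostPath:
      path: /etc/kubernetes/auth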
I am developing a project that requires LDAP validation, but I don't have a developer/QA LDAP server.
Does a small LDAP server exist for Windows for testing/development?
I just want to validate an active account and detect whether it is blocked or not, so I don't want to install a whole domain to do that.
---never mind---
I tried a compiled OpenLDAP but I was unable to understand it. Simply put, I didn't get how to connect to it, how to create an account, or how to validate one; the LDAP client returned some obscure error messages. I tried several ways to do it and finally gave up.
In the end I installed a domain; it was absurdly easy to install (2008 R2): restart the server and that's it.
Anyway, thanks for the advice about OpenLDAP and AD LDS.
If you're on Windows and use Active Directory, have a look at Active Directory Lightweight Directory Services (AD LDS) - an LDAP server you can install and use on your dev machine.
The open source LDAP server from OpenLDAP should give you what you need:
http://www.openldap.org/
Apache provides a directory server called "ApacheDS" (Apache Directory Server), along with a GUI management client called "Apache Directory Studio", which is based on Eclipse.
If you only want to run a quick test, the Studio provides a built-in server that is easy to connect to.
You can also install the studio directly in Eclipse using this update site: http://directory.apache.org/studio/update/2.x/
Active Directory works fine as an LDAP server and it's included in the Windows Server 2008 trial. See the answer to my question Testing LDAP Connections to Active Directory Server. I have it running in a KVM virtual machine on Linux and query it from an OpenLDAP-based client.
Necromancing.
I've had the same problem.
OpenDS is very easy to get up and running, and doesn't require administrator rights.
You just need to download the ZIP file and run the installer.
The installer can populate the directory with test entries, too - if you want to see some example data.
That's exactly what you're looking for when wanting a simple dev test server.
Note:
OpenDS development has ceased, and it was forked into OpenDJ, a commercial project by ForgeRock.
While OpenDS still works on Java 7, only OpenDJ will work with Java 8.
However, OpenDJ is still FREE and open source.
You can find the source code here on Bitbucket
and you can grab it with git:
git clone https://stash.forgerock.org/scm/opendj/opendj.git
Forget OpenLDAP and AD LDS; these are way too complicated for simple testing.
In addition, their user interfaces are horrible, and you need something that you can get up and running FAST, without admin rights, and have it populated with test data in a few minutes, not in a few weeks.
And ApacheDS unfortunately requires administrator privileges (because it only runs as a Windows service, and you can't start/stop those without being an administrator).
So OpenDJ is the definite way to go.
Apache Directory Studio is a good client to browse, edit and import/export data via LDAP (LDIF).
However, despite Apache Directory Studio being written in Java, it adds a dependency on GTK and only has binaries for x86/x64, which means it won't work on a Chromebook with an ARM processor or on a Raspberry Pi.
But with the test entries added automagically in OpenDJ/OpenDS (if you choose the option), you don't even need that.
When in doubt, use a web based interface that "talks LDAP".
Try OpenDS; it is very simple and requires only Java.
You could roll your own LDAP server for testing pretty easily using godap: https://github.com/bradleypeabody/godap
It's written in Go. It's very small and simple.
You would basically need to copy the server example out of godap_test.go and wire it up however you need.
Try simple-ldap-server
I know it's pretty late to answer this question, but this is for the reference of someone who runs into the same question.
I wrote a simple LDAP server (using ldapjs on Node.js) for authentication-testing purposes. Please feel free to use it. It's easy to configure and can support both the LDAP and LDAPS protocols; it just requires a JSON file containing the user IDs you want to add (or it comes with a pre-included users JSON file, which you can use if you want).
The project is on GitHub. (I'll add a Docker image too.)
Feel free to visit and use
Docker image
Simple Ldap Server Git
OpenLDAP. It ships with most Unixes and Linuxes. For Windows it is available from several sources:
Cygwin
http://www.userbooster.de
as the Silver (free) edition of the CDS product http://www.symas.com/cds.shtml. This is crippled compared to the Userbooster version, which is complete.
You can use a Docker container with Samba as a domain controller; here I show how to set one up in just a few minutes.
Basically you need to:
Create an image with this (read the post if you want to know why):
$ git clone https://github.com/padiazg/alpine-samba-ad-container.git
$ cd alpine-samba-ad-container
# replace your-user with your username
$ docker build -t your-user/alpine-samba-ad-container .
Create some folders and files to persist the container data
mkdir /tmp/krb-conf \
  && mkdir /tmp/krb-data \
  && mkdir /tmp/smb-conf \
  && mkdir /tmp/smb-data \
  && touch /tmp/krb-conf/krb5.conf
Run the container
docker run -d \
-e SAMBA_ADMIN_PASSWORD=a-secure-password \
-e SAMBA_DOMAIN=local \
-e SAMBA_REALM=local.your-domain.io \
-e LDAP_ALLOW_INSECURE=true \
--mount type=bind,source=/tmp/krb-conf/krb5.conf,target=/etc/krb5.conf \
--mount type=bind,source=/tmp/krb-data,target=/var/lib/krb5kdc \
--mount type=bind,source=/tmp/smb-conf,target=/etc/samba \
--mount type=bind,source=/tmp/smb-data,target=/var/lib/samba \
-p 389:389 \
--name smb4ad \
your-user/alpine-samba-ad-container
And now you are good to go.
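If you want to verify that the container actually answers LDAP queries, a simple bind from the host with ldapsearch (from the ldap-utils / openldap-clients package) should work; the bind account and base DN below are derived from the SAMBA_REALM used above, so treat them as assumptions:

# bind as the domain administrator and look up its own account
ldapsearch -H ldap://localhost:389 \
  -D "administrator@local.your-domain.io" \
  -w a-secure-password \
  -b "dc=local,dc=your-domain,dc=io" \
  "(sAMAccountName=administrator)"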