I am trying to install Django on a CentOS 7 cloud server. For that I created a virtual environment using python -m venv env. The environment was created successfully, but it does not activate when I use the shell command source env/bin/activate. The name of the current virtual environment does NOT appear on the left of the prompt, so I understand the virtual environment is not active. My terminal commands are given below.
user@libserver.libsoft.in /$ python3.8 -V
Python 3.8.0
user@libserver.libsoft.in /$ which python3.8
/usr/local/bin/python3.8
user@libserver.libsoft.in /$ python3.8 -m venv /home/LPython/my_env
user@libserver.libsoft.in /$ cd /home/LPython/my_env
user@libserver.libsoft.in /home/LPython/my_env$ source bin/activate
I created my_env in /home/LPython/. Any problem with this path?
This problem is solved. Actually, I was not using the BASH shell, even though echo $SHELL outputs bin/bash. Because of this output, I thought I was using the BASH shell. Then I logged in through SSH (if you are using Windows, you must log in through PuTTY). After that I could activate my virtual environment with source my_env/bin/activate.
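For anyone hitting the same thing, a minimal check before activating might look like this (a sketch assuming the my_env path from above; the default activate script targets bash/zsh):
ps -p $$                                   # shows which shell process is actually running
bash                                       # start bash if the current shell is not bash
source /home/LPython/my_env/bin/activate   # the (my_env) prefix should now appear in the prompt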
I have previously installed Apache on my Mac Mini using Homebrew, but I'm currently using MAMP. When I issue the terminal command httpd -S to check Apache configurations, it checks the Homebrew configurations. Is there a way I can test the configurations for MAMP? I would like to use the same httpd -S command, but if there's another preferred way to do it for MAMP, that's fine too.
You do it with the -f flag, like this:
httpd -f /Applications/MAMP/conf/httpd.conf -S
This appears in the httpd options:
Options:
...
-f file: specify an alternate ServerConfigFile
First of all, I would like to say that I'm new to Docker and everything around it.
I have been wanting to make a container with Apache, PHP and Firebird installed. So far, so good; everything seems to work, and I get my default page when I type my IP address and :8080 into my browser's address bar. I do so by first starting my container like this:
docker run -p 8080:80 -d apps
Where "apps" is the name of my container.
I have achieved this with my Dockerfile, which looks like this (it might be a bit messy, I'm still learning the good practices!):
# Download of base image - ubuntu 20.04
FROM ubuntu:20.04
# Updating/upgrading
RUN apt-get update -y && apt-get upgrade -y
# Installing apache2, php and firebird with modules
RUN DEBIAN_FRONTEND="noninteractive" apt-get install apache2 php libapache2-mod-php -y && \
apt-get install php-curl php-gd php-intl php-json php-mbstring php-xml php-zip -y && \
DEBIAN_FRONTEND="noninteractive" apt-get install firebird3.0-server -y && apt-get install firebird->
# Start up apache in foreground by default
CMD /usr/sbin/apache2 -D FOREGROUND
ENTRYPOINT service apache2 restart && /bin/bash
# Expose apache
EXPOSE 80
Now, my idea was to export this container to another computer and try the same thing. I have followed a few tutorials and managed to import my container on the new machine. My problem here is that, somehow, the command I previously used doesn't work; it shows me this error:
docker: Error response from daemon: No command specified.
See 'docker run --help'.
Which is odd, because it works just fine on the other machine. I also tried this command, WHICH WORKS:
docker run -i -t -p 8080:80 apps /bin/bash
This one works alright, but I don't want to have to open a bash shell every time I want my Apache page to load. I would like my container to run without me having to get into it, if that makes sense.
In my opinion, it probably comes from the fact that I only loaded the container, and not the image used to build it (maybe a bad practice? I couldn't find anything about it on Google).
Here is my setup, just in case:
On the first machine (the one where I created the image and the container):
Ubuntu 20.04 LTS
Apache/2.4.41
Docker 19.03.8
On the other machine, on which I'm trying to make my container work:
Ubuntu 18.04 LTS
Apache/2.4.29
Docker 19.03.6
Thank you for your patience and time !
apps is your Docker image. If you want to give a name to your container, you can specify --name in the run command, i.e.,
docker run --name container_name -p 8080:80 -d apps
You can use sudo docker save -o apps.tar apps to create a tar file of the image,
then change the permissions of the root-owned tar file with sudo chmod 777 apps.tar.
Copy this tar file to the other system you want to try it on, then
sudo docker load --input apps.tar
This will load the image; then you can use the previous command to start the container:
docker run -p 8080:80 -d apps
Where "apps" is the name of my container. <- This statement is incorrect and is perhaps the misunderstood concept that leads you to the problem.
apps is the name of the image, not the name of the container. On the host on which you can run the container, you must have built that image from the Dockerfile that you shared, using the command:
docker build -t apps .
Copy the Dockerfile to the host where you cannot run the container, build the image there as well, and try running the container again.
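For illustration, assuming the Dockerfile and anything it copies sit in the current directory on the second host, the full sequence would look something like:
# Build the image named "apps" from the Dockerfile in the current directory
docker build -t apps .
# Run it detached, publishing container port 80 on host port 8080
docker run -p 8080:80 -d apps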
I'm trying to get unison working after upgrading to Mac OS X Catalina. Unfortunately, MacPorts installs a more recent version of OCaml (4.08.1), which means that the unison 2.51.2 release won't compile.
Well, that's no problem, I just update to git master on unison and recompile. Unfortunately, this fails at sync time because the version of OCaml used to compile on the Mac (4.08.1) is different from the one used to compile on the other machine (4.07.1). Sigh. Okay, use opam magic to install 4.07.1 on my machine. Everything should be fine, right? No!
Here's the error:
Connected [//zzzmyhost//home/clements/unison-home -> //zzzmyotherhost//Users/clements/clements]
Looking for changes
Uncaught exception Failure("input_value: ill-formed message")
Raised at file "/private/tmp/unison/src/lwt/lwt.ml", line 126, characters 16-23
Called from file "/private/tmp/unison/src/lwt/generic/lwt_unix_impl.ml", line 102, characters 8-23
Called from file "/private/tmp/unison/src/update.ml" (inlined), line 2105, characters 2-69
Called from file "/private/tmp/unison/src/uitext.ml", line 978, characters 16-56
Called from file "/private/tmp/unison/src/uitext.ml", line 1066, characters 6-90
Called from file "/private/tmp/unison/src/uitext.ml", line 1088, characters 19-66
Called from file "/private/tmp/unison/src/uitext.ml", line 1144, characters 21-43
What's going on?
Sigh... the problem here (very non-obvious) is actually a corrupted/wrong-format synchronization file, created during the failed sync in the earlier test.
The solution is just to go into ~/Library/Application Support/Unison (on a UNIX machine this path would presumably be ~/.unison) and delete the archive file that's causing the problem (probably the most recent one). In a pinch, just delete all of the archive files and start over.
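For example, on a Linux host the cleanup might look like this (a sketch assuming the default ~/.unison location and the usual ar* archive naming; on macOS substitute the Application Support path above):
ls -lt ~/.unison/ar*    # archive files; the most recently modified one is the likely culprit
rm ~/.unison/ar*        # last resort: remove all archives and let unison rescan from scratch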
I've got the same problem between Windows and Ubuntu 20.04 after upgrading from Ubuntu 18.04. I tried the binary from Ubuntu 18.04 in 20.04, which still fails, so the incompatibility is likely inside one of the dependencies.
As a workaround I created a Docker image based on Ubuntu 18.04:
FROM ubuntu:18.04
RUN apt-get update && apt-get dist-upgrade -y
RUN apt-get install unison -y
RUN useradd martin --home /home/martin
WORKDIR /home/martin
USER martin
Building it with docker build -t unison:18.04 .
And then I added a wrapper at ~/bin/unison-2.48.4-docker:
#!/bin/bash
docker run --rm -i \
-v /home/martin/dirtosync:/home/martin/dirtosync \
-v /home/martin/.unison:/home/martin/.unison \
--hostname $(hostname) \
unison:18.04 unison "$@"
Setting the --hostname is important, since the hostname is part of the archive file.
Inside the profile on my Windows machine I configured:
servercmd = ~/bin/unison-2.48.4-docker
In my setup with two Windows clients and one Ubuntu 18.04 server, connected by SSH, the problem started with a second server running Ubuntu 20.04. Neither the old server nor the Windows clients could sync with the new machine.
My solution: copying the binary from Ubuntu 18.04 to a new directory on the Ubuntu 20.04 machine. This new file is referenced in the "authorized_keys" file of SSH on the new machine.
So far, everything works great with unison 2.48.4.
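A sketch of what such a forced-command entry in authorized_keys might look like (the path, key type, and key material here are placeholders, not the actual setup):
command="/home/user/unison-2.48.4/unison -server" ssh-ed25519 AAAA...placeholder... client@windows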
I want to build a container from my conda environment following this post. However, I get the following error: '/bin/sh: 1: cannot create ~/.bashrc: Directory nonexistent'. I am using a vagrant VM to build my image and would be grateful for any help.
Editing the .bashrc, aside from failing, will not be helpful, as the shell Singularity loads is explicitly started with --norc. You want to use the $SINGULARITY_ENVIRONMENT variable in %post to have the values available.
Something along these lines:
%post
# You may need to install some pre-reqs your host system has installed outside of conda, e.g.
# apt update && apt install -y build-essential make zlib
ENV_NAME=$(head -1 environment.yml | cut -d' ' -f2)
echo ". /opt/conda/etc/profile.d/conda.sh" >> $SINGULARITY_ENVIRONMENT
echo "conda activate $ENV_NAME" >> $SINGULARITY_ENVIRONMENT
. /opt/conda/etc/profile.d/conda.sh
conda env create -f environment.yml -p /opt/conda/envs/$ENV_NAME
I listed a few libraries that you probably have installed on your current machine but that might not be installed in the slim Docker image. You can install them via apt or conda, depending on your preference. If it does happen, though, it will be specific to your environment.yml and host OS, so you'll have to iterate until the build succeeds.
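Assuming the %post section above lives in a definition file named, say, conda.def (the file name is an assumption), building the image would then look something like:
# Build the Singularity image from the definition file
sudo singularity build conda-env.sif conda.def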
I'm trying to get started on libvirt with VirtualBox as a virtualization solution. I installed everything and VirtualBox itself is running when using their VBoxHeadless command.
However, libvirt fails to connect to VirtualBox:
# virsh -c vbox:///session
libvir: error : could not connect to vbox:///session
error: failed to connect to the hypervisor
I could not find any hints in the libvirt documentation as to whether I have to make any domain-specific configuration before using virsh.
Does anyone have a hint? Or even better, maybe a tutorial that works through using libvirt, virsh, or its APIs (my later goal) from the ground up.
If you are doing this on Ubuntu, then the problem is their libvirt package is built without VirtualBox support.
You can rebuild the package with support very easily. Something like:
apt-get source -d libvirt
sudo apt-get build-dep libvirt
dpkg-source -x libvirt*dsc
Go into the libvirt directory and edit debian/rules so that instead of --without-vbox it says --with-vbox. You can add an entry to the top of debian/changelog so the package is compiled as a different version (e.g., append ~local1 to the version).
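For illustration, that edit could be done with a one-liner (assuming the flag appears literally in debian/rules):
sed -i 's/--without-vbox/--with-vbox/' debian/rules
Then build the packages: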
dpkg-buildpackage -us -uc -b -rfakeroot
You'll get new .debs built in the directory above. Use dpkg -i to install the relevant ones (libvirt0, libvirt0-bin, and whatever else you want).
Double-check whether or not you have write access to /var/run/libvirt/libvirt-sock.
The socket file should have permissions similar to:
$ sudo ls -la /var/run/libvirt/libvirt-sock
srwxrwx--- 1 root libvirtd 0 2010-08-24 14:54 /var/run/libvirt/libvirt-sock
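If your user is not in that group, one common fix is to add it (the group name is taken from the listing above; log out and back in afterwards for it to take effect):
sudo usermod -aG libvirtd $USER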
I think it could also be helpful to increase libvirt's logging output by running this in your shell:
export LIBVIRT_DEBUG=1
There is an Ubuntu PPA for libvirt with VirtualBox support: https://launchpad.net/~cxl/+archive/ubuntu/libvirt