I need to install mbstring (and a few other extensions) for PHP on CloudBees. Is this possible?
Note that I'm using an updated PHP version as described here:
https://developer.cloudbees.com/bin/view/DEV/PHP+Builds
I don't think scripts have sudo access, so I can't simply use the package manager. I don't think these extensions exist as PEAR packages either. So I'm stumped.
Here is the response from CloudBees support. It seems to work fine; just make sure you don't have any spaces in your Jenkins build path!
Our provided PHP versions don't have the mbstring module activated. You will need to build your own PHP version to get it. To be sure your custom PHP build works on a CloudBees slave, you can build it with a Jenkins job on your instance (with the various --with-XXX or --without-XXX options you need).
We do something similar ourselves with a script like:
# Download (the $version and $php_name variables are set by the Jenkins job)
regex='.*(RC|alpha|beta).*'
if [[ $version =~ $regex ]]; then
    wget http://downloads.php.net/dsp/php-${version}.tar.bz2
else
    wget http://us3.php.net/distributions/php-${version}.tar.bz2
fi
# Unpack
tar xjf php-${version}.tar.bz2
# Build (--enable-mbstring compiles in the mbstring module asked about above)
cd php-${version}
./configure --prefix=/home/jenkins/tools/php/${php_name} \
    --with-curl --with-openssl --enable-mbstring
make && make install
As a side note, you should also take care to specify a good installation prefix with --prefix. I would choose something like /home/jenkins/tools/php/5.4/.
To store the compiled PHP engine you can generate a tar.gz/bz2 file of the target installation directory. Then store it in your WebDAV directory, which is accessible at /private/{account}/ during a build when "Mount CloudBees DEV@cloud Private WebDAV Repository" is checked.
You should add a first step to jobs requiring PHP that extracts this archive. As the Jenkins workspace is usually cached on DEV@cloud, you can extract the archive only if it isn't already there. That will speed up your build.
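A minimal sketch of those two steps, assuming the /home/jenkins/tools/php/5.4 prefix suggested above and a hypothetical archive name (the {account} placeholder is your CloudBees account):
# End of the PHP build job: package the installation and store it in WebDAV
tar cjf php-5.4.tar.bz2 -C /home/jenkins/tools/php 5.4
cp php-5.4.tar.bz2 /private/{account}/
# First step of any job needing PHP: extract only if the cached workspace lacks it
if [ ! -x /home/jenkins/tools/php/5.4/bin/php ]; then
    mkdir -p /home/jenkins/tools/php
    tar xjf /private/{account}/php-5.4.tar.bz2 -C /home/jenkins/tools/php
fi
export PATH=/home/jenkins/tools/php/5.4/bin:$PATH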
I'm trying to install Dropbox on CentOS 8, but the terminal gives strange errors. I tried different commands, same error.
First I downloaded the *.rpm file from the Dropbox website; now I'm trying to install it.
Commands I tried:
rpm -ivh nautilus-dropbox-2020.03.04-1.fedora.x86_64.rpm
yum localinstall nautilus-dropbox-2020.03.04-1.fedora.x86_64.rpm
Error:
Last metadata expiration check: 0:18:27 ago on Thu 12 Mar 2020 03:46:17 PM EET
Error:
Problem: conflicting requests
nothing provides libgnome >= %{gnome_version} needed by nautilus-dropbox-2020.03.04-1.fc21.x86_64
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
[root@localhost Downloads]
I also tried --skip-broken and --nobest, but no luck.
I also tried sudo yum install libgnome but it gives an error:
Last metadata expiration check: 9:51:39 ago on Thu 12 Mar 2020 02:42:06 PM UTC.
No match for argument: libgnome
Error: Unable to find a match: libgnome
I have:
[adminuser@localhost ~]$ cat /etc/centos-release
CentOS Linux release 8.1.1911 (Core)
I tried to google this error, but no luck. Could you please give me a hint on how to overcome this?
Thank you
This is a packaging bug. Contact Dropbox support and report it.
Technical details (just in case you are a Dropbox employee):
When an RPM is built, any macro you use gets expanded. Try it yourself:
$ rpm --eval '%{_bindir}'
/usr/bin
However, when a macro is not defined, you get the original text back unexpanded:
$ rpm --eval '%{some_bullshit}'
%{some_bullshit}
So the macro gnome_version should presumably contain some version number, but it was not defined when the package was built.
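A hedged sketch of the kind of fix needed in the package's .spec file (the fallback version here is an assumption; only the Dropbox packagers know the intended value):
# Define gnome_version (or give it a fallback) so it never leaks
# unexpanded into the package's Requires metadata:
%{!?gnome_version: %define gnome_version 2.0}
Requires: libgnome >= %{gnome_version}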
nothing provides libgnome
"libgnome" is about libgnome-2 → https://linux.dropbox.com/fedora/ → I.e. Fedora only packages. CentOS 8 has no libgnome* available.
https://www.dropbox.com/install-linux → Compile from source → CentOS 8
# dnf install nautilus-devel-3.28.1-10.el8.x86_64 python3-docutils
wget https://linux.dropbox.com/packages/nautilus-dropbox-2020.03.04.tar.bz2
tar xvf nautilus-dropbox-2020.03.04.tar.bz2
cd nautilus-dropbox-2020.03.04/
./configure && make
# make install
Result: nautilus-dropbox-2020.03.04-1.el8.x86_64.rpm https://drive.google.com/file/d/1AcxlVdbWOzQvcoVOFYCiaVny9MzgC-Ea/view?usp=sharing
# rpm -Uvh nautilus-dropbox-2020.03.04-1.el8.x86_64.rpm: no issues.
First, note that the command shown on the install page is for the headless installation. It will probably work, but my preference is to use Dropbox with Nautilus integration.
These instructions assume an installation of Dropbox with Nautilus integration.
We need to compile the installer from source.
a. Download the latest package
wget https://linux.dropbox.com/packages/nautilus-dropbox-2020.03.04.tar.bz2
b. Extract tarball
tar xjf ./nautilus-dropbox-2020.03.04.tar.bz2
c. Try to compile
cd nautilus-dropbox-2020.03.04; ./configure;
Then you get an error:
Error:
Problem: conflicting requests
- nothing provides libgnome >= %{gnome_version} needed by nautilus-dropbox-2020.03.04-1.fc21.x86_64
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
Now we need to install nautilus-devel and python3-docutils
NOTE: You will get "configure: error: couldn't find docutils" if you forget python3-docutils.
This command will enable the PowerTools repository and install what is needed:
dnf --enablerepo=PowerTools install nautilus-devel python3-docutils
Now you can run ./configure && sudo make install
That's it. Go to the start menu and type "Dropbox"; it will start the installer.
Restore a local backup of Dropbox (optional)
If you have a local backup, turn off the network after you see the Dropbox folder created. Then copy all your files into that folder and turn the network back on once the copy finishes.
This solution worked for me running CentOS Linux release 8.2.2004 (Core).
I'm behind a firewall and I have the fzf.tar.gz package which has the content of the git repo. How can I install fzf offline?
The install command ~/.fzf/install reaches out to github.com. I'm on Red Hat with no internet connection.
https://github.com/junegunn/fzf
This is just what I observed, I can't guarantee I didn't miss anything:
First, clone fzf to $FZF_DIR on an online PC. Then I'd suggest you execute 'install' on the online PC to get the necessary files:
~/.fzf/bin/fzf (this one is downloaded by the install script)
~/.fzf.bash (this one is generated by the install script)
cp ~/.fzf/bin/fzf $FZF_DIR/bin
Copy $FZF_DIR (with the fzf binary in it) and .fzf.bash to your offline PC, then:
ln -s $FZF_DIR ~/.fzf
Finally, source .fzf.bash in your .bashrc.
The entire $FZF_DIR is needed because it includes some other useful scripts sourced by .fzf.bash.
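Put together, a minimal sketch of the whole transfer (the tarball name is illustrative, and the install script may prompt for options):
# On the online PC: clone, run the installer, then bundle everything up
git clone https://github.com/junegunn/fzf ~/.fzf
~/.fzf/install              # downloads bin/fzf and generates ~/.fzf.bash
tar czf fzf-offline.tar.gz -C ~ .fzf .fzf.bash
# On the offline PC: unpack into $HOME and wire up the shell
tar xzf fzf-offline.tar.gz -C ~
echo 'source ~/.fzf.bash' >> ~/.bashrc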
Is it possible to generate a .hex file with MicroPython and my own python program code at a Linux command line, rather than in one of the editors?
Looking at the tag in your question, it looks like you want to use MicroPython on the BBC micro:bit, correct?
If that's the case then you can use this Python command line tool: https://github.com/ntoll/uflash/
Instructions on how to install it and use it can be found in the README at that link.
This works with Python 2 and 3, and your Linux distribution is very likely to have at least one Python version available out-of-the-box.
If you have pip installed you can easily install it with: pip install uflash
But you can also download the source code, using git or downloading a zip file from GitHub (https://github.com/ntoll/uflash/archive/master.zip), and run it without installing anything. In this case you can execute the uFlash script with Python:
python uflash.py path_to_your_code.py
And the current version of uFlash includes the latest version of MicroPython for the micro:bit.
You can write the MicroPython code for the micro:bit in any text editor, such as VS Code or Vim. Save it as a .py file.
To create the .hex file, use the py2hex tool, which is installed along with uflash when you install uflash using the command:
pip install uflash
To create a .hex file for a micro:bit MicroPython file called hello.py:
py2hex hello.py
This creates a file called hello.hex. This can be dragged and dropped onto your connected micro:bit through the file explorer. I use Nautilus and the micro:bit appears as 'MICROBIT'.
You can automate the creation and loading of the .hex file to the micro:bit using uflash, e.g.
uflash hello.py
This will create the .hex file and then load it onto an attached micro:bit. The .hex file will not be left on your file system, though. The micro:bit has a habit of no longer being attached to the file system after loading a .hex file and needs to be re-attached between builds.
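A small helper for that re-attach dance (a sketch, assuming your distribution mounts the drive at the usual /media path):
# Wait until the MICROBIT drive is mounted again, then flash
until [ -d "/media/$USER/MICROBIT" ]; do sleep 1; done
uflash hello.py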
Working Ubuntu 22.04 host CLI setup with Carlos Atencio's Docker to build your own firmware
After trying to set up the toolchain for a while, I finally decided to Google for a Docker image with the toolchain, and found https://github.com/carlosperate/docker-microbit-toolchain at this commit from Carlos Atencio, a Micro:bit Foundation employee, and that just absolutely worked:
# Get examples.
git clone https://github.com/bbcmicrobit/micropython
cd micropython
git checkout 7fc33d13b31a915cbe90dc5d515c6337b5fa1660
# Get Docker image.
docker pull ghcr.io/carlosperate/microbit-toolchain:latest
# Build setup to be run once.
docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest yt target bbc-microbit-classic-gcc-nosd@https://github.com/lancaster-university/yotta-target-bbc-microbit-classic-gcc-nosd
docker run -v $(pwd):/home --rm ghcr.io/carlosperate/microbit-toolchain:latest make all
# The container runs as root, so make the build outputs writable again.
sudo chmod -R +666 .
# Build one example.
tools/makecombinedhex.py build/firmware.hex examples/counter.py -o build/counter.hex
# Build all examples.
for f in examples/*; do b="$(basename "$f")"; echo $b; tools/makecombinedhex.py build/firmware.hex "$f" -o "build/${b%.py}.hex"; done
And you can then flash the example you want to run with:
cp build/counter.hex "/media/$USER/MICROBIT/"
What uflash does is ship its own precompiled firmware.hex, which is the part that requires the toolchain; it then just uses that to build the combined hex in Python.
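For the curious, a sketch of that combining step using uflash's own helpers (names as found in uflash's source at the time of writing; treat them as illustrative internals, not a stable API):
# combine.py: embed a MicroPython script into the runtime hex bundled with uflash
import uflash

with open('hello.py', 'rb') as f:
    script = f.read()

# _RUNTIME is the precompiled MicroPython firmware shipped inside uflash
combined = uflash.embed_hex(uflash._RUNTIME, uflash.hexlify(script))
uflash.save_hex(combined, 'hello.hex')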
The cool thing is that now that we have the toolchain, we can also create examples directly in C/C++/assembly: How to compile C/C++ code into a .hex file for the BBC micro:bit? Those can likely run much faster.
Previous failed attempts at setting it up myself
The yotta package manager used by the BBC micro:bit bit-rotted almost immediately after it was discontinued, making pip install yotta approaches like https://flames-of-code.netlify.app/blog/microbit-cpp-1/ very difficult.
The GCC gcc-arm-embedded toolchain PPA ppa:team-gcc-arm-embedded/ppa has also been discontinued: https://askubuntu.com/questions/1243252/how-to-install-arm-none-eabi-gdb-on-ubuntu-20-04-lts-focal-fossa and now you would have to download it from the arm.com website.
Atencio's Docker setup explains how to do it though: https://github.com/carlosperate/docker-microbit-toolchain/blob/master/Dockerfile . The key is likely his carefully pinned requirements.txt, presumably kept from the days when things still worked, which avoids yotta's endless dependency issues. His image is based on Ubuntu 20.04.
I want to write a Dockerfile which exports a directory from a remote Subversion repository into the build context so I can work with these files in subsequent commands. The repository is secured with user/password authentication.
That Dockerfile could look like this:
# base image
FROM ubuntu
# install subversion client
RUN apt-get -y update && apt-get install -y subversion
# export my repository
RUN svn export --username=myUserName --password=myPassword http://subversion.myserver.com/path/to/directory
# further commands, e.g. on container start run a file just downloaded from the repository
CMD ["/bin/bash", "path/to/file.sh"]
However, this has the drawback of printing my username and password on the screen or any logfile where the stdout is directed, as in Step 2 : RUN svn export --username=myUserName --password=myPassword http://subversion.myserver.com/path/to/directory. In my case, this is a Jenkins build log which is also accessible by other people who are not supposed to see the credentials.
What would be the easiest way to hide the echo of username and password in the output?
So far I have not found a way to execute RUN commands in a Dockerfile silently when building the image. Could the password maybe be imported from somewhere else and attached to the command beforehand, so it does not have to be printed anymore? Or are there any methods for password-less authentication in Subversion that would work in the Dockerfile context (in terms of setting them up without interaction)?
The Subversion Server is running remotely in my company and not on my local machine or the Docker host. To my knowledge, I have no access to it except for accessing my repository via username/password authentication, so copying any key files as root to some server folders might be difficult.
A Dockerfile RUN command is always executed and cached when the Docker image is built, so the variables svn needs to authenticate must be provided at build time. You can avoid this kind of problem by moving the svn export call to when docker run is executed. To do that, create a bash script, declare it as the Docker entrypoint, and pass environment variables for the username and password. Example:
# base image
FROM ubuntu
ENV REPOSITORY_URL http://subversion.myserver.com/path/to/directory
# install subversion client
RUN apt-get -y update && apt-get install -y subversion
# make it executable before you add it here, otherwise docker will complain
ADD docker-entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
docker-entrypoint.sh
#!/bin/bash
# maybe add some validation here that $REPO_USER and $REPO_PASSWORD are set
svn export --username="$REPO_USER" --password="$REPO_PASSWORD" "$REPOSITORY_URL"
# continue execution
path/to/file.sh
Run your image:
docker run -e REPO_USER=jane -e REPO_PASSWORD=secret your/image
Or you can put the variables in a file:
.svn-credentials
REPO_USER=jane
REPO_PASSWORD=secret
Then run:
docker run --env-file .svn-credentials your/image
Remove the .svn-credentials file when you're done.
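If the export really has to happen at build time, newer Docker versions support BuildKit build secrets, which are mounted only for the duration of a single RUN and end up neither in the image layers nor in the build log. A minimal sketch, assuming BuildKit is enabled and the password sits in a local file svnpass.txt:
# syntax=docker/dockerfile:1
FROM ubuntu
RUN apt-get -y update && apt-get install -y subversion
# the secret is available at /run/secrets/svnpass only while this RUN executes
RUN --mount=type=secret,id=svnpass \
    svn export --username=myUserName \
        --password="$(cat /run/secrets/svnpass)" \
        http://subversion.myserver.com/path/to/directory
Build it with:
docker build --secret id=svnpass,src=svnpass.txt .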
Maybe using SVN with SSH is a solution for you? You could generate a public/private key pair. The private key could be added to the image whereas the public key gets added to the server.
For more details you could have a look at this stackoverflow question.
One solution is to ADD the entire SVN directory you previously checked out on your builder file system (or added as an svn:externals if your Dockerfile is itself in an SVN repository, like this: svn propset svn:externals 'external_svn_directory http://subversion.myserver.com/path/to/directory' ., then do an svn up).
Then in your Dockerfile you can simply have this:
ADD external_svn_directory /tmp/external_svn_directory
RUN svn export /tmp/external_svn_directory /path/where/to/export/to
RUN rm -rf /tmp/external_svn_directory
Subversion stores authentication details on the client side (unless this is disabled in the configuration) and reuses the stored username/password for subsequent operations on the same URL.
Thus you only have to run one (successful) svn export in the Dockerfile with the username/password, and you can let SVN use the cached credentials later (drop the auth options from the command line).
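In Dockerfile terms, a sketch (the second URL is illustrative; note the first RUN still echoes the credentials once, so this only hides them from later commands):
RUN svn export --username=myUserName --password=myPassword \
    http://subversion.myserver.com/path/to/directory
# subsequent operations against the same server reuse the cached credentials
RUN svn export http://subversion.myserver.com/path/to/another-directory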
I have a package that is using the autotools to build and install.
Part of the package is a website that can be run on the local machine.
So in the package there is a .conf file that is meant to be either
copied or linked to the /etc/apache2/conf.d directory. What's the
standard way that packages would do this? If possible, I'd like for
the user not to have an extra step to make the website work. I'd like
to have them install the package and then be able to browse to
http://localhost/newpackage to get up and running.
Also, is there a way for autoconf to know about the Apache install, or a
standard way to find it through the environment somehow? If someone could
point me in the right direction that would be great.
Steve
The first thing you should do is locate the Apache extension tool, apxs or apxs2 (depending on the Apache version and/or platform you are building for). Once you know where the tool is located, you can run queries to get certain Apache config parameters. For example, to get the system config dir you can run:
apxs2 -q SYSCONFDIR
Here is a snippet showing how you can locate the Apache extension tool (be careful, it may contain syntax errors):
dnl Note: AC_DEFUN goes here plus other stuff
AC_MSG_CHECKING(for apache APXS)
AC_ARG_WITH(apxs,
    [AS_HELP_STRING([[--with-apxs[=FILE]]],
        [path to the apxs, defaults to "apxs".])],
    [
        if test "$withval" = "yes"; then
            APXS=apxs
        else
            APXS="$withval"
        fi
    ])
if test -z "$APXS"; then
    for i in /usr/sbin /usr/local/apache/bin /usr/bin ; do
        if test -f "$i/apxs2"; then
            APXS="$i/apxs2"
            break
        fi
        if test -f "$i/apxs"; then
            APXS="$i/apxs"
            break
        fi
    done
fi
AC_MSG_RESULT($APXS)
AC_SUBST(APXS)
The way to use APXS in your automake Makefile.am would look something like this:
## Find the Apache sys config dir (@APXS@ is substituted by configure)
APACHE2_SYSCONFDIR = `@APXS@ -q SYSCONFDIR`
## Misc automake stuff goes here
install: install-am
	cp my.conf $(DESTDIR)${APACHE2_SYSCONFDIR}/conf.d/my.conf
I assume you are familiar with automake and autoconf tools.
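A more idiomatic automake sketch, for comparison: compute the directory once in configure.ac and let a _DATA primary generate the install rules (the APACHE2_SYSCONFDIR and apacheconf names are my own choices, not a required convention):
# configure.ac, after the APXS check above
APACHE2_SYSCONFDIR=`$APXS -q SYSCONFDIR`
AC_SUBST(APACHE2_SYSCONFDIR)
# Makefile.am: automake generates the install/uninstall targets itself
apacheconfdir = @APACHE2_SYSCONFDIR@/conf.d
apacheconf_DATA = my.conf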