I wanted to generate a showroom page for my Jupyter notebooks using a continuous integration tool like Travis.
A perfect example is Shogun's showroom page, but I don't know what they are using.
We are using Buildbot to generate it; here's the job for it:
http://buildbot.shogun-toolbox.org/builders/nightly_default/builds/44
The content of the generate-notebooks step is a pretty straightforward shell script:
#!/bin/bash
# $1: directory to search for notebooks, $2: parallelism, $3: output directory
# Make the freshly built install visible to Python
export PYTHONPATH=$PWD/build_install/lib/python2.7/dist-packages:$PYTHONPATH
export LD_LIBRARY_PATH=$PWD/build_install/lib:$LD_LIBRARY_PATH
# Copy every notebook except the template into the output directory
find "$1" -type f \( -name '*.ipynb' ! -name 'template.ipynb' \) | xargs -I{} cp '{}' "$3"
# Execute each notebook and convert it to HTML, $2 jobs in parallel
find "$1" -type f \( -name '*.ipynb' ! -name 'template.ipynb' \) | xargs -P "$2" -I{} jupyter nbconvert --ExecutePreprocessor.timeout=600 --to html --output-dir "$3" --execute '{}'
# Extract a preview image from each generated HTML page
find "$3" -type f -name '*.html' | xargs -P "$2" -I{} python extract_image_from_html.py '{}'
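The find filter that the script relies on can be demonstrated on its own with a throwaway directory (the /tmp/nb_demo path and notebook names here are made up for illustration):

```shell
# Match every .ipynb file except template.ipynb, as the script does.
mkdir -p /tmp/nb_demo
touch /tmp/nb_demo/analysis.ipynb /tmp/nb_demo/template.ipynb
find /tmp/nb_demo -type f \( -name '*.ipynb' ! -name 'template.ipynb' \)
```

Only analysis.ipynb is printed; the same file list feeds both the copy and the nbconvert steps.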
I'm attempting to start docker and postgresql automatically with my Ubuntu WSL2 instance. I read about the /etc/wsl.conf configuration file, but it only starts one service, not two. For example, if I have:
[boot]
command = service docker start
and restart WSL, I get the following:
mryan ~ $service docker status
* Docker is not running
mryan ~ $service postgresql status
12/main (port 5432): online
Again, if I remove the last line from /etc/wsl.conf and restart WSL, Docker starts just fine. I've also tried quotes around the commands, as in command="service docker start", but it didn't make a difference. Is there some format error I'm making here? Any help would be appreciated. I can get around this by manually starting the services, but it would be nice to make things work properly!
Try combining the commands into a single line maybe, with &&.
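A minimal sketch of that combined entry, assuming a WSL build whose [boot] section supports the command key (service names as in the question):

```ini
[boot]
# && runs the second service only if the first started successfully;
# use ; instead to attempt both regardless of the first one's result
command = service docker start && service postgresql start
```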
One can still start it on demand, e.g. from .bashrc or .zshrc:
# Start dockerd in the background only if it is not already running
if ! pgrep -x dockerd > /dev/null; then
    sudo dockerd > /dev/null 2>&1 &
    disown
fi
This may require adding your user to the docker group:
sudo usermod -a -G docker $USER
I need to programmatically execute a long-running script on a remote server. I tried to use ssh with screen or tmux, and so far I could not make it work.
With tmux I managed to make it work when typing the ssh command from my local machine terminal:
ssh <server_name> -t -t tmux new -s my_session \; set-buffer "bash my_script.sh" \; paste-buffer \; send-keys C-m \; detach
But if I run this programmatically I get this error:
open terminal failed: missing or unsuitable terminal: unknown
Connection to <server_name> closed
Use the -d flag to new-session to start tmux detached. So:
ssh <server_name> tmux new -ds my_session \; send-keys "bash my_script.sh" C-m
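If tmux is installed locally, the detached-session pattern can be tried without ssh; the session name demo_session and the temp file path below are made up for illustration:

```shell
# Start a detached session, run a command inside it, then clean up.
tmux new -ds demo_session
tmux send-keys -t demo_session "echo done > /tmp/demo_out.txt" C-m
sleep 1                        # give the session's shell time to run the command
cat /tmp/demo_out.txt          # done
tmux kill-session -t demo_session
```

tmux has-session -t my_session can be used afterwards to check that the remote session is still alive.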
I'm trying to get the stdout generated by mysqldump into a tar file:
mdm#deb606:~$ mysqldump --opt test1 -u root -ppassword | tar -czf example.tar.gz
This doesn't work.
At the moment I've temporarily solved it using:
mdm#deb606:~$ mysqldump --opt test1 -u root -ppassword | gzip -f > example.gz
Is it possible to do the same using tar or bzip2?
I don't know that it's possible to pipe directly into tar (in general, that doesn't make a lot of sense); however, the bzip2 command accepts - to mean "read from stdin", i.e.:
mdm#deb606:~$ mysqldump --opt test1 -u root -ppassword | bzip2 - > example.bz2
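Since tar archives files rather than raw streams, a common workaround is to write the dump to a file first and then tar that file. A sketch using echo as a stand-in for the mysqldump command (file names are made up):

```shell
# Stand-in for: mysqldump --opt test1 -u root -ppassword > /tmp/db.sql
echo "CREATE TABLE t (id INT);" > /tmp/db.sql
# Archive (and gzip) the dump file; -C avoids storing the /tmp prefix
tar -czf /tmp/example.tar.gz -C /tmp db.sql
tar -tzf /tmp/example.tar.gz   # lists: db.sql
```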
So this command will replace abc with XYZ in file.txt in the directory /tmp (note -i -e rather than -ie, which GNU sed would read as -i with backup suffix e):
sed -i -e 's/abc/XYZ/g' /tmp/file.txt
How do you do a find and replace like this across a large number of files in a directory with a .html extension in one go?
find /start/path -name '*.html' -exec sed -i -e 's/abc/XYZ/g' '{}' \;
As by your request, here is what it does:
find /start/path -name '*.html'
Finds all files matching the glob *.html, starting in /start/path. The pattern is quoted so that the shell passes it to find unexpanded.
The -exec option tells find not to just print out the files, but to run a command on each of them. Inside this command, {} is replaced by the file name. The -exec option has to end with a semicolon, which we have to escape with a backslash, or else bash will swallow it.
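The combined command can be tried safely on a throwaway directory first (the path and file name here are made up):

```shell
mkdir -p /tmp/sed_demo
echo "abc here" > /tmp/sed_demo/page.html
# Quoted pattern so the shell doesn't expand *.html itself
find /tmp/sed_demo -name '*.html' -exec sed -i 's/abc/XYZ/g' '{}' \;
cat /tmp/sed_demo/page.html   # XYZ here
```

Note that -i without a suffix is GNU sed's in-place flag; BSD/macOS sed needs -i '' instead.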
Again, for the OP's specific situation: put the following into a file called replaceabc.sh
#!/bin/bash
find '/home/129224/domains/sandpit.uk-cpi.com/html/sshit' -name '*.html' -exec sed -i -e 's/abc/XYZ/g' '{}' \;
then from the shell prompt
chmod 700 /path/to/replaceabc.sh
/path/to/replaceabc.sh
Does anyone know of a free online tool that can crawl any given website and return just the Meta Keywords and Meta Description information?
Assuming you have access to Linux/Unix:
mkdir temp
cd temp
wget -r SITE_ADDRESS
Then, for keywords:
egrep -r -h 'meta[^>]+name="keywords' * | sed 's/^.*content="\([^"]*\)".*$/\1/g'
and for descriptions:
egrep -r -h 'meta[^>]+name="description' * | sed 's/^.*content="\([^"]*\)".*$/\1/g'
If you want all the unique keywords, try:
egrep -r -h 'meta[^>]+name="keywords' * | sed 's/^.*content="\([^"]*\)".*$/\1/g' | sed 's/\s*,\s*/\n/g' | sort | uniq
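The same pipeline can be exercised against a local sample file instead of a crawled site; the file and its contents below are made up for illustration:

```shell
printf '<meta name="keywords" content="linux, shell, linux">\n' > /tmp/sample.html
# Extract the content attribute, split on commas, then count each keyword
egrep -h 'meta[^>]+name="keywords' /tmp/sample.html \
  | sed 's/^.*content="\([^"]*\)".*$/\1/g' \
  | sed 's/\s*,\s*/\n/g' | sort | uniq -c | sort -rn
```

uniq -c prefixes each keyword with its count, so here linux comes out with a count of 2 and shell with 1.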
I'm sure there's a one-liner or program out there that does this exact thing, and there are definitely easier answers.
To retrieve all meta information, you can also try the Meta Tags Analyzer tool.