How to set a password in the scrapinghub/splash Docker installation? - scrapy

I'm using Splash on an Ubuntu server and followed the instructions to install it with Docker (https://github.com/scrapy-plugins/scrapy-splash).
docker run -p 8050:8050 scrapinghub/splash
How can I change the settings and set username and password?

The easiest way is to use Aquarium with the auth_user and auth_password options.
This is described in How to run Splash in production?, from the Splash FAQ.
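If it helps, here is a rough sketch of that setup, assuming Aquarium's cookiecutter template still prompts for these options (the project folder name below is just an assumed default):
pip install cookiecutter docker-compose
cookiecutter gh:TeamHG-Memex/aquarium   # answer the prompts, including auth_user and auth_password
cd aquarium                             # or whatever project name you entered at the prompt
docker-compose up                       # Splash should now be reachable only with those credentials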

When it comes to integrating Aquarium-Splash with Scrapy, the user and password can be passed to scrapy crawl using the -a option, as per the official Scrapy documentation.
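For instance (the spider name and argument names are placeholders; your spider has to read these arguments and forward them in the Splash request):
scrapy crawl myspider -a splash_user=admin -a splash_pass=secret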

Related

Alpine Linux: How to force a user to change their password at next login?

Is it possible to force a user to change their password at the next login in Alpine Linux?
Background:
We deploy a bunch of Alpine Linux VMs for IaaS, using Packer. During configuration, all machines get the same password (from the script).
Links for other Linux distributions
tecmint.com
cybercity.biz
I have tried chage -d 0 {user-name}, but Alpine does not know the command, and the package manager (apk add) was not able to find a package for it.
I also tried passwd --expire {user-name}; passwd is a valid command, but the -e or --expire option is invalid.
chage is part of the shadow package, so installing this package should work for you:
https://pkgs.alpinelinux.org/contents?file=chage&path=&name=&branch=edge&repo=community&arch=x86_64
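A minimal sketch, assuming the account is called alice (placeholder name):
apk add shadow          # provides the chage command
chage -d 0 alice        # last password change set to 1970, so a new password is required at next login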

TF Serving - pull the Docker image or build from git?

Struggling to understand the workflow here for tf serving.
Official docs say to “docker pull tensorflow/serving”. But they also say to “git clone https://github.com/tensorflow/serving.git”
Which one should I use? I assume the git version is so I can build my own custom serving image?
When I pull the official image from docker and run the container, why can’t I access the root? Is it because I haven’t “built it” properly yet?
If you have added custom code, clone the repository first and then build the image.
If you just want to deploy the image directly, pull it and run it (see the sketch below).
BTW, what do you mean by "access the root"? AFAIC, root is the default user in a container.
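A minimal pull-and-run sketch of that second path (the model directory and name are placeholders; the next answer shows the same pattern with a concrete model):
docker pull tensorflow/serving
docker run -p 8501:8501 --mount type=bind,source=/path/to/my_model,target=/models/my_model -e MODEL_NAME=my_model -t tensorflow/serving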
I think that is a good observation.
The only place where I feel cloning the GitHub repository (https://github.com/tensorflow/serving.git) is required is if you want to run examples like 'half_plus_two' and 'half_plus_three', or the examples mentioned in this link:
https://github.com/tensorflow/serving/tree/master/tensorflow_serving/example.
Other than that, as far as I know, pulling the Docker image should do everything needed.
Even building a custom Docker image from your own model doesn't require cloning the GitHub repo.
The code for building a custom Docker image is shown below:
# Start a serving container to copy the model into
sudo docker run -d --name sb tensorflow/serving
# Copy the exported SavedModel into the container
sudo docker cp /usr/local/google/home/abc/Jupyter_Notebooks/Premade_Estimator_Export sb:/models/Premade_Estimator_Export
# Commit the container as a new image with MODEL_NAME set, then stop the helper container
sudo docker commit --change "ENV MODEL_NAME Premade_Estimator_Export" sb iris_container
sudo docker kill sb
# Alternatively, serve the model straight from the host with a bind mount
sudo docker pull tensorflow/serving
sudo docker run -p 8501:8501 --mount type=bind,source=/usr/local/google/home/abc/Jupyter_Notebooks/TF_Serving/Premade_Estimator_Export,target=/models/Premade_Estimator_Export -e MODEL_NAME=Premade_Estimator_Export -t tensorflow/serving &
# Inspect the SavedModel signatures
saved_model_cli show --dir /usr/local/google/home/abc/Jupyter_Notebooks/Premade_Estimator_Export/1556272508 --all
# Get the status of the model
curl http://localhost:8501/v1/models/Premade_Estimator_Export
Regarding access to root: if I understand correctly, you don't want to have to prefix every docker command with sudo. Follow the steps below to run Docker commands without sudo.
i. Add docker group if it does not already exist
ii. Add the connected user $USER to the docker group. Below are the commands to be run in the Terminal:
sudo groupadd docker
sudo usermod -aG docker $USER
iii. Reboot your PC and you should be able to execute Docker commands without sudo.
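To verify, any docker command should then work without sudo, for example:
docker run hello-world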

Run ServiceStack Console as Daemon on DigitalOcean

All,
I have successfully installed my ServiceStack console app on my DigitalOcean droplet and can run it from the command line using mono. When I do this, my app is accessible using Postman from my laptop.
I have also tried to use Upstart to run my app as a daemon. I can see from the logging that it successfully launches when I reboot, but unless I am logged in as root and have started my console app from the command line, I can't access the console app from the outside when running as the daemon. I have tried this with ufw enabled (configured to allow the port I am using) and disabled and it makes no difference.
I am reasonably certain this is a permissions issue in my upstart config file for my console app, but since I am brand new to linux, I am unclear as to my next step to get this console app available as a daemon.
Any and all help is greatly appreciated...
Bruce
# ServiceStack GeoAPIConsole Application
# description "GeoAPIConsole"
# author "Bruce Parr"
setuid root
# start on started rc
start on started networking
stop on stopping rc
respawn
exec start-stop-daemon --start --exec /usr/bin/mono /var/console/GeoAPIConsole.exe
This worked. I added a user geoapiconsole and added the -S and -c switches, then followed with initctl start GeoAPIConsole:
# ServiceStack Example Application
description "ServiceStack Example"
author "ServiceStack"
start on started rc
stop on stopping rc
respawn
exec start-stop-daemon -S -c geoapiconsole --exec /usr/bin/mono /var/console/GeoAPIConsole.exe
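To check that the job is up and the service answers locally, something like this should work (8088 is a placeholder for whatever port your AppHost listens on; /metadata is ServiceStack's built-in metadata page):
sudo initctl status GeoAPIConsole
curl http://localhost:8088/metadata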

Installing Solr and indexing MySQL

Can anyone help me with installing Solr and configuring it against a MySQL table? I have tried almost all the tutorials, with Jetty and also Tomcat, and I still get errors like "Data Handler not defined" or "could not find solr". It's been a week and I've been trying all day.
In order to get Solr running (assuming that you've downloaded Solr and extracted it to a location), just navigate to the jetty folder.
Under that there should be a start.jar.
Just type in java -jar start.jar - this should start Solr under Jetty. As simple as that. For all my development purposes, I use this. I wouldn't worry about Tomcat unless the app is ready to be deployed to some server.
In order to get your Solr instance to pull data from MySQL, you need the DataImportHandler configured. This documentation describes it well.
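As a rough sketch (the core name, table, columns and credentials are placeholders, and the MySQL JDBC driver and DataImportHandler jars must be on Solr's classpath with the /dataimport handler registered in solrconfig.xml), the data source goes into a data-config.xml in the core's conf directory and the import is then triggered over HTTP:
cat > /path/to/solr/mycore/conf/data-config.xml <<'EOF'
<dataConfig>
  <dataSource type="JdbcDataSource" driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost/mydb" user="dbuser" password="dbpass"/>
  <document>
    <entity name="item" query="SELECT id, name FROM items">
      <field column="id" name="id"/>      <!-- field names must exist in your schema -->
      <field column="name" name="name"/>
    </entity>
  </document>
</dataConfig>
EOF
curl 'http://localhost:8983/solr/mycore/dataimport?command=full-import'   # start the import
curl 'http://localhost:8983/solr/mycore/dataimport?command=status'        # watch its progress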
EDIT:
A Google search for "solr mysql import" led me here. It is exactly what you're after, I suppose.
I also had the same issue, and it is not easy to find a simple tutorial for this. Anyway, I found the following tutorial and it was useful for me:
http://lasithtechavenue.blogspot.com/2013/11/crawling-mysql-database-with-apache-solr.html
Thanks
Hi, please take a look here:
https://github.com/vikash32/indexing-mysql-table-into-solr
I have tried to make it less messy.
Step 1:
Log in to Linux and go to the /opt folder, i.e. cd /opt/
Step 2:
Download Solr 6.6.2 from the Solr site; use the command below to download it on Linux:
sudo wget http://www-us.apache.org/dist/lucene/solr/6.6.2/solr-6.6.2.tgz
Step 3:
Extract the service installation script:
sudo tar xzf solr-6.6.2.tgz solr-6.6.2/bin/install_solr_service.sh --strip-components=2
Step 4:
Install Solr as a service using the script:
sudo bash ./install_solr_service.sh solr-6.6.2.tgz
Step 5: To check the Solr server status:
sudo service solr status
Step 6: To start Solr in cloud mode on RHEL, go to the Solr directory, i.e. cd /opt/solr, and run:
sudo ./bin/solr start -c -force -s server/solr -p 8983 -z zk1:2181,zk2:2181,zk3:2181
zk1, zk2 and zk3 are the hostnames or IP addresses of the ZooKeeper nodes.
Step 7: To create a collection on Solr, go to the Solr directory, i.e. cd /opt/solr, and run:
sudo ./bin/solr create -c <collection_name> -p 8983 -s 2 -rf 2
-s stands for the number of shards
-rf stands for the replication factor
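To confirm the collection was created, you can ask Solr to list the collections:
curl 'http://localhost:8983/solr/admin/collections?action=LIST&wt=json'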

Amazon EC2: How to install GlassFish in EC2?

I'm trying to deploy my JSF site on EC2 instances, and I'm new to cloud computing.
How do I install GlassFish 3 Open Source on my EC2 instance?
Update:
To download, use the curl command:
curl http://www.java.net/download/jdk6/6u27/promoted/b03/binaries/jdk-6u27-ea-bin-b03-linux-i586-27_may_2011-rpm.bin > java-rpm.bin
or using wget:
wget http://www.java.net/download/jdk6/6u27/promoted/b03/binaries/jdk-6u27-ea-bin-b03-linux-i586-27_may_2011-rpm.bin
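A guess at the follow-up step, assuming the old self-extracting RPM installer format (the file name matches the curl example above):
chmod +x java-rpm.bin
sudo ./java-rpm.bin        # accept the licence; this unpacks the JDK RPM(s)
sudo rpm -ivh jdk-*.rpm    # only needed if the .bin did not already install them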
Here is what you need to do:
Get an AMI instance launched, and follow this tutorial to install GlassFish. (Unfortunately, GlassFish installation tutorials are given as YouTube videos on their official website!) The simplest way is to start with an existing EBS-backed instance; this is how I started.
Now, if you want to kill the instance, that's the same as throwing the machine out of the window. If you want to reuse it later, or want a blueprint for many instances that you will be launching in the future, you need to bundle it up and register it as an image.
If you have an EBS-backed instance, creating an image out of it is easier than sending an email. All you need to do is log in to your AWS web console, select the instance you want to create an AMI of, and select Instance Actions > Create Image from the menu. Done!
If you have an instance-storage-based AMI, you need to bundle it up, store it in your S3 bucket, and register the AMI using ec2-api-tools and ec2-ami-tools. So have them installed in your instance and create the image as very neatly explained here.
Now, as far as cost is concerned, refer to this. As far as I understand (my clients pay, so I don't really know how much), your running instance is going to cost you some money even if there is no activity. However, if you make an AMI and store it in S3 or in an EBS volume, you will be paying for the storage cost.
Hope this explains what you wanted.
First you need to install the JDK and then set the JAVA_HOME environment variable.
Then follow the commands below (applicable on an Amazon Linux EC2 instance).
The directory used here is usr/server.
# Download and unpack GlassFish, then move it under usr/server
wget http://download.oracle.com/glassfish/4.1.2/release/glassfish-4.1.2.zip
unzip glassfish-4.1.2.zip
mv glassfish4 ../server/
# Create a dedicated group and user for GlassFish
groupadd glassfish-group
useradd -s /bin/bash -g glassfish-group glassfish-user
# Give the new user ownership of the installation
cd usr/server
chown -Rf glassfish-user.glassfish-group glassfish4
ls -l | grep glassfish
cd glassfish4
cd glassfish/domains
cd glassfish/bin
pwd
# Install an init script so GlassFish can run as a service
cd /etc/init.d/
wget https://geekstarts.info/scripts/glassfish.sh
mv glassfish.sh glassfish
chmod 755 glassfish
ls -l | grep glassfish
# Switch to the GlassFish user and configure the domain with asadmin
cd ~ glassfish/
su glassfish-user
whoami
pwd
cd glassfish4/bin
ls -l
whoami
./asadmin
change-master-password --savemasterpassword // default is changeit
change-admin-password // default is blank
start-domain
enable-secure-admin
restart-domain
stop-domain
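After start-domain, something like the following should confirm it is up (remember to also open the ports in the EC2 security group; 8080 is GlassFish's default HTTP listener and 4848 the admin console):
./asadmin list-domains          # shows whether domain1 is running
curl http://localhost:8080/     # default HTTP listener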