I have a MySQL instance hosted in Google Cloud SQL, and a container which uses this database. I tried to initialize the database schema from the Dockerfile using the following:
FROM anapsix/alpine-java
ADD ./mysql/init.sql /docker-entrypoint-initdb.d
init.sql
SET sql_mode = '';
CREATE DATABASE IF NOT EXISTS `locations_schema`;
USE `locations_schema`;
CREATE TABLE `locations` (
`id` int(11) NOT NULL,
`value` varchar(255) NOT NULL,
`label` varchar(255) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
But unfortunately this is not working. Is there any way I can initialize the DB?
I think that you are getting confused implementing your solution.
If you have a single Google Cloud MySQL instance used by your containers, why are you trying to initialize the database schema each time a container is created from the new custom image? Isn't it better to initialize it just once, by connecting to it manually?
In any case, docker-entrypoint-initdb.d is used to initialize local MySQL instances, not an instance running on a different machine. Notice that you will at least need to specify the address of the instance, a user and a password to let the container connect to it. There are different ways to do so and you can find several guides, but I don't think that this is what you need to implement.
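For example, a minimal sketch of doing that one-off initialization with the plain mysql client (the host IP, user and password are placeholders for your Cloud SQL instance; your client's IP must be authorized, or you can tunnel through the Cloud SQL Proxy instead):
mysql --host=<CLOUD_SQL_INSTANCE_IP> --user=<USER> --password < ./mysql/init.sql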
However, if you are trying to init a MySQL instance running inside the Docker container itself, here you can find an excellent explanation of how to do it, but I think you have to change the image you are starting from, since I don't think the one you are using contains MySQL.
You can use a script to initialize the DB without needing to pass the root password, provided MySQL persists its data outside of the container and you specify a volume on the docker run command. You will need to set environment variables specific to your environment, and the script file needs to be added to the container via a Dockerfile. The basic MySQL command line:
set -e
set -x
# Start the MySQL daemon in the background and remember its PID
mysqld_safe &
mysql_pid=$!
# Optionally echo ".." in a loop until mysqladmin reports the server has started
mysql -e "GRANT ALL ON *.* TO 'root'@'%' IDENTIFIED BY '' WITH GRANT OPTION"
# Then shut down your MySQL daemon and wait for it to stop
mysqladmin shutdown
wait $mysql_pid
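As a rough sketch of how this could be wired up (the image name, script name and volume path below are assumptions, not from the original post): the script is copied in via the Dockerfile, and the data directory is mounted as a volume when the container is started so the initialized data persists:
# hypothetical Dockerfile lines:
#   FROM mysql:5.7
#   COPY init_db.sh /usr/local/bin/init_db.sh
docker build -t my-mysql-init .
docker run -d --name mydb -v /srv/mysql-data:/var/lib/mysql my-mysql-init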
The Docker documentation at https://docs.docker.com/samples/library/mysql/ offers more DB start options, including starting from an application in a Docker container. You can also combine MySQL and Docker commands for other database functions, like creating a schema or creating multiple databases.
I am using PDI 8.3 with the repository database on another server.
My expectation was that, if I do not define any log connections in the job properties, the job will not send any logs to the repository database.
However, when I run a job with kitchen.sh, it defines a new database connection "live_logging_info" that points to "localhost:5432". Because the PDI repository database is on another server, the job fails.
May I know how to define the default DB log connection? Thank you.
Under PDI 8.3 there should be a folder called simple-jndi, and within that folder a file called jdbc.properties. Near the bottom of that file are the settings for live_logging_info. By default it points to localhost:5432, but you can set it to any location, and it can be another type of database (MySQL, MSSQL, etc.).
The settings that are available by default are:
live_logging_info/type=javax.sql.DataSource
live_logging_info/driver=org.postgresql.Driver
live_logging_info/url=jdbc:postgresql://localhost:5432/hibernate?searchpath=pentaho_dilogs
live_logging_info/user=hibuser
live_logging_info/password=password
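As a sketch, assuming PDI is unpacked under /opt/data-integration and your repository database is on repo-db.example.com (both placeholders for your environment), the URL can be repointed like this:
sed -i 's|localhost:5432|repo-db.example.com:5432|' /opt/data-integration/simple-jndi/jdbc.properties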
I am using the command below to create a Dataproc cluster:
gcloud dataproc clusters create informetis-dev \
  --initialization-actions "gs://dataproc-initialization-actions/jupyter/jupyter.sh,gs://dataproc-initialization-actions/cloud-sql-proxy/cloud-sql-proxy.sh,gs://dataproc-initialization-actions/hue/hue.sh,gs://dataproc-initialization-actions/ipython-notebook/ipython.sh,gs://dataproc-initialization-actions/tez/tez.sh,gs://dataproc-initialization-actions/oozie/oozie.sh,gs://dataproc-initialization-actions/zeppelin/zeppelin.sh,gs://dataproc-initialization-actions/user-environment/user-environment.sh,gs://dataproc-initialization-actions/list-consistency-cache/shared-list-consistency-cache.sh,gs://dataproc-initialization-actions/kafka/kafka.sh,gs://dataproc-initialization-actions/ganglia/ganglia.sh,gs://dataproc-initialization-actions/flink/flink.sh" \
  --image-version 1.1 --master-boot-disk-size 100GB --master-machine-type n1-standard-1 \
  --metadata "hive-metastore-instance=g-test-1022:asia-east1:db_instance" \
  --num-preemptible-workers 2 --num-workers 2 --preemptible-worker-boot-disk-size 1TB \
  --properties hive:hive.metastore.warehouse.dir=gs://informetis-dev/hive-warehouse \
  --worker-machine-type n1-standard-2 --zone asia-east1-b --bucket info-dev
But Dataproc failed to create the cluster, with the following errors in the failure file:
+ mysql -u hive -phive-password -e ''
ERROR 2003 (HY000): Can't connect to MySQL server on 'localhost' (111)
+ mysql -e 'CREATE USER '\''hive'\'' IDENTIFIED BY '\''hive-password'\'';'
ERROR 2003 (HY000): Can't connect to MySQL server on 'localhost' (111)
Does anyone have any idea what is behind this failure?
It looks like you're missing the --scopes sql-admin flag as described in the initialization action's documentation, which will prevent the CloudSQL proxy from being able to authorize its tunnel into your CloudSQL instance.
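For illustration, a trimmed-down sketch of the create command with just the cloud-sql-proxy initialization action, showing where the flag goes (the other flags from your original command would stay as they are):
gcloud dataproc clusters create informetis-dev \
  --scopes sql-admin \
  --initialization-actions gs://dataproc-initialization-actions/cloud-sql-proxy/cloud-sql-proxy.sh \
  --metadata "hive-metastore-instance=g-test-1022:asia-east1:db_instance" \
  --zone asia-east1-b --bucket info-dev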
Additionally, aside from just the scopes, you need to make sure the default Compute Engine service account has the right project-level permissions in whichever project holds your CloudSQL instance. Normally the default service account is a project editor in the GCE project, so that should be sufficient when combined with the sql-admin scopes to access a CloudSQL instance in the same project, but if you're accessing a CloudSQL instance in a separate project, you'll also have to add that service account as a project editor in the project which owns the CloudSQL instance.
You can find the email address of your default compute service account under the IAM page for the project deploying Dataproc clusters, with the name "Compute Engine default service account"; it should look something like <number>@project.gserviceaccount.com.
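If you do need to grant that role across projects, one possible way (a sketch; substitute the project ID of the project that owns the CloudSQL instance and the exact service-account email shown on your IAM page) is:
gcloud projects add-iam-policy-binding <cloudsql-project-id> \
  --member serviceAccount:<default-compute-service-account-email> \
  --role roles/editor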
I am assuming that you already created the Cloud SQL instance with something like this, correct?
gcloud sql instances create g-test-1022 \
--tier db-n1-standard-1 \
--activation-policy=ALWAYS
If so, then it looks like the error is in how the argument for the metadata is formatted. You have this:
--metadata "hive-metastore-instance=g-test-1022:asia-east1:db_instance”
Unfortunately, the zone looks to be incomplete (asia-east1 instead of asia-east1-b).
Additionally, when running that many initialization actions, you'll want to provide a pretty generous initialization action timeout so the cluster does not assume something has failed while your actions take a while to install. You can do that by specifying:
--initialization-action-timeout 30m
That will allow the cluster to give the initialization actions 30 minutes to bootstrap.
Around the time you reported this, an issue was detected with the cloud-sql-proxy initialization action; it is most probable that this issue is what affected you.
Nowadays, it shouldn't be an issue.
I am trying to install and set up MobileFirst. So far I have been able to install WAS ND, create a server, install MobileFirst Server, install the database software, and create the database, and now I am trying to create a runtime with the configuration tool.
Below is the database screenshot, which proves that the database exists.
This is a screenshot of creating the admin configuration; it also shows that the port number is OK.
This is a screenshot of the additional database settings, where the tool checks for the database WLADM70 that I specified. Validation happened without a problem, but it always stays in the "checking for the database" state.
I want to run a command on Zabbix agents:
Some simple Unix commands, to obtain our reporting data.
Cases where some processing is required on the agent side.
There seem to be a variety of approaches being talked about. So how do I execute such commands on a Zabbix agent?
Run commands from the server directly from a new item.
First, set EnableRemoteCommands=1 in the agent conf file (for all of your agents) to enable this feature.
Create a new item. A field on the "new item" page says 'key'. Enter:
system.run[command]
as the 'key' string, where command is the command you want to be run on the agent. Here is an example:
system.run[sysctl dev.cpu.0.temperature | cut -d ' ' -f 2 | tr -d C]
Perhaps you need to run something substantially more complex that is too long to fit in there? Then you will need to make a custom script. Put your custom scripts on a local webserver, or somewhere on the web.
Then you might set the item's key to:
system.run[command -v script && script || (wget script_url -O /path/to/script && script)]
This fetches the missing script to the agent the first time the item is executed. However, that is a rather crude hack, and not very elegant.
A better way is to go to "Administration" --> "Scripts" in the menu. From there, you can create a new script to use in an item which may be configured to run on any of your agents.
Make a special custom item to re-run your script periodically (like a cron job). The job of the special script item is to update the agent with a collection of your other needed custom scripts.
Of course, you could just write all of your custom scripts directly into Zabbix's MySQL database, and it is very tempting to do that. But be aware that they'd then be lost if your Zabbix database ever gets fried, corrupted or lost, and Zabbix databases have a habit of growing large, unwieldy and out of control. So don't do that. Store them separately somewhere else, under version control (git or subversion).
Once that's all sorted, we can finally go ahead and create further custom items to run your custom scripts. Again using:
system.run[script]
as the item's key, just as before, where 'script' is the command (plus any arguments) that executes your custom script locally on the agent.
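For instance, a minimal sketch of such a custom script (the path and the metric are made up for illustration): a script that reports the used space (%) on the root filesystem, which you would then reference with system.run[/usr/local/bin/used_root_pct.sh]:
#!/bin/sh
# /usr/local/bin/used_root_pct.sh -- print used space (%) on the root filesystem
df -P / | awk 'NR==2 {gsub("%", "", $5); print $5}'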
Define the user parameter on the client (where the Zabbix agent is located) in /etc/zabbix/zabbix_agentd.conf. The key should be unique. I am using lsof as an example:
UserParameter=open_file,lsof | wc -l
Restart the agent: service zabbix-agent restart
Test whether the key is working using the zabbix_get utility. To do that from the Zabbix server, invoke the following: /usr/local/bin/zabbix_get -s <HOST/IP of the zabbix agent> -k open_file (it should return a number in this case).
Create an item with the key on the Zabbix server at the template level (the return type should be correctly defined, otherwise Zabbix will not accept it):
Type: Zabbix Agent (Active)
Key: open_file
Type of Information: Numeric (unsigned)
Data Type: decimal
You may create a graph using the item to monitor the value at regular intervals.
Here is the official documentation.
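As a side note (not covered above, so treat it as a sketch): UserParameter also has a flexible form, UserParameter=key[*],command, where $1..$9 in the command are replaced by the item key's parameters. For example, to count open files per user:
UserParameter=open_file[*],lsof -u "$1" | wc -l
An item with the key open_file[zabbix] would then return the number of files opened by the zabbix user.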
Today I tried to install Rails 3.2.1 with PostgreSQL 8.4 on my local Ubuntu 10.04 VM. I basically followed the instructions from http://www.mcbsys.com/techblog/2011/10/set-up-postgresql-for-rails-3-1/. The only things I had to do differently were to change a line in pg_hba.conf to "local postgres myapp trust", since the default user postgres didn't have a password, and to create a PostgreSQL user that matched my Linux system login user name (let's call it "login_name" for the sake of this example); otherwise I could not use rake db:create to create the db.
My database.yml file looked like this:
development:
adapter: postgresql
encoding: unicode
database: development_db
pool: 5
user: login_name
password: some_random_password
My question is: why did I have to create a user name that matched my system login name to get this to work, and is there a way around this? I Googled the heck out of this and really couldn't find a satisfactory answer.
Take a look at your PostgreSQL configuration; you'll find two related files there:
pg_hba.conf
pg_ident.conf
pg_hba.conf controls how auth requests are handled, typically with an entry like this:
local all all ident
ident means that all local (Unix-socket) connections are authenticated with the system user's identity.
If you have a "non-system" user, you need to create a map for them in pg_ident.conf:
# MAP OS-Name DB-Name
myusers user user
myusers user dbuser
Alter pg_hba.conf like this:
local all all ident map=myusers
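After changing pg_hba.conf or pg_ident.conf, the server has to reload its configuration before the new rules apply, for example (the exact service name varies by distribution and version, e.g. postgresql-8.4 on Ubuntu 10.04; pg_ctl reload works as well):
sudo /etc/init.d/postgresql reload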
Authentication is very well documented in the config files and online in the PostgreSQL Administration Guide
The name you specify in user: is the name used to log in; you do not have to make that the system user if you don't want to. The thing is, when the database cluster is initialized its owner is (typically) set to the user who ran initdb, which can vary. So the username on the host and the username in the database tend to be loosely coupled.
From a security standpoint, a best practice is to run initdb as the postgres user, and then explicitly GRANT permissions, or change the owner of the template1 database (and its password) to the one you want your applications to use. Databases (more accurately, schemas) will use the settings of template1 when they are created.
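As a minimal sketch of that approach (the role name and password are taken from the question above; adjust to your setup), run as the postgres superuser:
sudo -u postgres createuser --no-superuser --no-createrole --createdb login_name
sudo -u postgres psql -c "ALTER ROLE login_name WITH PASSWORD 'some_random_password';"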
Practically speaking, on web-accessible servers, I like having a (system) username with limited access (little or no sudo) that has the same name as the database user. Pick a name and use it everywhere. This makes it easier to copy databases, configure things, and so on. Otherwise you end up getting "role foo not found" errors.
Rails makes it super easy to get going in development and that's good. It should not make it super-easy to create an insecure production environment :-)