Namenode does not start after restarting the PC (Hadoop 2.7.3)

I've configured Hadoop 2.7.3 on Ubuntu 16.04 and everything ran fine (word count and other MapReduce jobs all worked).
After restarting the PC, I launch start-dfs, but the namenode does not start. Other guides say to remove the temporary directory, but I don't have one.
These are my files:
core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

By default the namenode keeps its metadata under hadoop.tmp.dir, which points into /tmp and is cleared on reboot, so move it to permanent directories.
Step 1: Create the folders like below:
-- sudo mkdir /dfs
-- sudo chown username:username /dfs
-- sudo chmod 755 /dfs
-- cd /dfs
If the machine is the name node:
-- mkdir nn
If it is a data node:
-- mkdir data
Step 2:
Add the properties below to hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/dfs/nn</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/dfs/data</value>
  </property>
</configuration>
Step 3:
Format the namenode: hadoop namenode -format (or the non-deprecated form, hdfs namenode -format)
Step 4:
Start all services (start-dfs.sh and start-yarn.sh).
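To verify the result, a quick sanity check (the log path assumes a default setup where Hadoop writes its logs under $HADOOP_HOME/logs):
jps                                                    # should now list NameNode, DataNode and SecondaryNameNode
tail -n 50 $HADOOP_HOME/logs/hadoop-*-namenode-*.log   # shows why the NameNode died if it is missing
hdfs dfsadmin -report                                  # basic HDFS health once the NameNode is up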

Related

Cache node module library with plugin frontend-maven-plugin

I have a React project that I configured on a GitLab CI/CD pipeline (with Docker).
In my pipeline, I have a step that creates the package using the frontend-maven-plugin (Packaging).
Each time I run the pipeline, all artifacts are downloaded again. I want to use a cache in the pipeline, but the configuration below doesn't work. I think frontend-maven-plugin uses a different location for its cache.
How can I configure it?
The GitLab pipeline:
stages:
  - build
  - analyse
  - package
  - deploiement

cache:
  paths:
    - project_front/projectdir/node_modules/

variables:
  MAVEN_CLI_OPTS: "-s ${MAVEN_PROJECT_SETTINGS} --batch-mode"
  MAVEN_OPTS: "-Dmaven.repo.local=${MAVEN_PROJECT_REPO}"

#
# Application build
Build:
  image: maven:3.5.2-jdk-8
  stage: build
  tags: [ 'project' ]
  script:
    # Build and install the Maven project
    - mvn $MAVEN_CLI_OPTS -U clean install -D maven.test.skip=true

#
# SONAR analysis step
Qualimétrie:
  image: maven:3.5.2-jdk-8
  stage: analyse
  tags: [ 'project' ]
  script:
    - mvn $MAVEN_CLI_OPTS -P sonarqubeproject,jacoco,nexus -U -D sonar.qualitygate.wait=true -D sonar.branch.name=$CI_COMMIT_BRANCH verify sonar:sonar
  artifacts:
    when: always
    reports:
      junit:
        - "*/target/surefire-reports/TEST-*.xml"

#
# Delivery package generation
Packaging:
  image: maven:3.5.2-jdk-8
  stage: package
  tags: [ 'project' ]
  script:
    # Packaging
    - mvn $MAVEN_CLI_OPTS -P package_livraison -U package -Dmaven.test.skip=true -D HTTP_PROXY=${PROXY_SRV} -D HTTPS_PROXY=${PROXY_SRV}
  artifacts:
    paths:
      - '**/target/*.tar.gz'

#
# Ansible deployment step
Deploiement INT:
  stage: deploiement
  tags: [ 'project' ]
  image: ansible/centos7-ansible
  only:
    - master
    - develop
    - feature-gitlab-cicd
  when: manual
  dependencies:
    - Packaging
  script:
    # Install xmllint to read the version from the POM
    - yum -y install libxml2
    # Read the Maven project version
    - VERSION_PROJECT=`xmllint --xpath '/*[local-name()="project"]/*[local-name()="version"]/text()' pom.xml`
    # Start the ssh-agent
    - eval $(ssh-agent -s)
    # Load the host's private key into the container
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    # Create the .ssh directory
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    # Register the host
    - ssh-keyscan myhost >> ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
    # Copy the package to the PROJECT machine
    - cp project_deploiement/target/*.tar.gz /home/gitlab-runner/depot/
    # Run the playbook
    - ansible-playbook -v -i project_ansible/project-hosts project_ansible/ansible/project-install-playbook-int.yml -f 10 --extra-vars "project_version=${VERSION_PROJECT} -vvvv"
The profile:
<profile>
  <id>package_livraison</id>
  <build>
    <plugins>
      <plugin>
        <groupId>com.github.eirslett</groupId>
        <artifactId>frontend-maven-plugin</artifactId>
        <version>${frontend-maven-plugin.version}</version>
        <configuration>
          <nodeVersion>${node.version}</nodeVersion>
          <yarnVersion>${yarn.version}</yarnVersion>
          <workingDirectory>${frontend-src-dir}</workingDirectory>
          <installDirectory>${project.build.directory}</installDirectory>
        </configuration>
        <executions>
          <execution>
            <id>install-frontend-tools</id>
            <goals>
              <goal>install-node-and-yarn</goal>
            </goals>
          </execution>
          <execution>
            <id>yarn-install</id>
            <goals>
              <goal>yarn</goal>
            </goals>
            <configuration>
              <!-- The Maven proxy is not used, so we redefine it -->
              <yarnInheritsProxyConfigFromMaven>true</yarnInheritsProxyConfigFromMaven>
              <arguments>install</arguments>
            </configuration>
          </execution>
          <execution>
            <id>build-frontend</id>
            <goals>
              <goal>yarn</goal>
            </goals>
            <phase>prepare-package</phase>
            <configuration>
              <arguments>build</arguments>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>
The properties:
<properties>
  <frontend-src-dir>./projectdir</frontend-src-dir>
  <node.version>v10.15.3</node.version>
  <yarn.version>v1.15.2</yarn.version>
  <frontend-maven-plugin.version>1.7.6</frontend-maven-plugin.version>
</properties>
The directory structure (screenshot not reproduced here).
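One possible direction, sketched here as an untested assumption rather than a verified fix: GitLab runners only cache paths inside the project directory, while frontend-maven-plugin installs node and yarn under ${project.build.directory} (wiped by clean) and Yarn keeps its package cache in ~/.cache/yarn by default, so neither location survives between jobs. Relocating the Yarn cache into the project and caching that path might help; YARN_CACHE_FOLDER is Yarn 1.x's environment variable for this, and whether the yarn process spawned by the plugin picks it up here is an assumption.

variables:
  YARN_CACHE_FOLDER: "$CI_PROJECT_DIR/.yarn-cache"   # hypothetical: project-local Yarn cache
cache:
  key: "$CI_COMMIT_REF_SLUG"
  paths:
    - .yarn-cache/
    - project_front/projectdir/node_modules/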

Install MySql connector on Ubuntu Server, Mono

The connector is installed in the GAC:
nn@sv3:~/mysqlconnector.net/v4.5$ gacutil -l | grep MySql
MySql.Data, Version=6.9.7.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d
I have added MySql.Data to the machine config:
nn@sv3:/$ cat /etc/mono/4.5/machine.config | grep MySql
<add name="MySQL Data Provider" invariant="MySql.Data.MySqlClient"
type="MySql.Data.MySqlClient.MySqlClientFactory, MySql.Data, Version=6.9.7.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d" />
When I try to compile with a reference to MySql.Data I get:
nn@sv3:/$ sudo ./build.sh
error CS0006: Metadata file `MySql.Data.dll' could not be found
I have tried changing the reference to the full path, and to both MySql.Data and MySql.Data.dll.
Here is the compile command:
dmcs -target:library -out:SQLtest.dll -pkg:dotnet -lib:/usr/lib/mono/2.0 -lib:/usr/lib/mono/4.5 -r:MySql.Data -r:*.cs
I have also tried v4.0, with the same result.
What am I missing to get this working?
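For what it's worth, a minimal sketch of a compile line that references the assembly by an explicit path instead of -r:MySql.Data; the GAC location below follows Mono's usual /usr/lib/mono/gac/<name>/<version>__<token>/ layout and is an assumption, and note that source files are passed as plain arguments, not via -r:.

# Sketch: reference the GAC'd assembly by full path (path assumed), sources as plain args
dmcs -target:library -out:SQLtest.dll -pkg:dotnet \
     -r:/usr/lib/mono/gac/MySql.Data/6.9.7.0__c5687fc88969c44d/MySql.Data.dll \
     *.cs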

Deploying site with Apache Maven SSH Wagon

Introduction
I want my Maven project's documentation (site) to be deployed to the docs folder of my website.
I'm using Apache Maven Wagon SSH at the moment to get the job done. The SSH connection works great: it pushes a zip file to the host and unpacks it.
I'm using fake names in these examples: provider.com, company.com and company.
Problem
But the unpacked site is placed in the wrong folder...
Instead of stepping through the directories /domains/company.com/htdocs/docs/ and creating the folders ${project.slug}/${project.artifactId}/${project.version} there to put the documentation in, it makes a new directory tree from the root: r.com/domains/company.com/htdocs/docs/${project.slug}/${project.artifactId}/${project.version}
Question
How can I get the SSH/SCP connection to connect to scp:ssh.provider.com, then walk through the /domains/company.com/htdocs/docs/ directories and finally create the directories /${project.slug}/${project.artifactId}/${project.version} with the documentation inside?
Appendix. Files
Log
: scp:ssh.provider.com/domains/company.com/htdocs/docs/company/company-parent/1-SNAPSHOT/docs/ - Session: Opened
[INFO] Pushing C:\Users\nberl\Code\company\company-maven-parent\target\site
[INFO] >>> to scp:ssh.provider.com/domains/company.com/htdocs/docs/company/company-parent/1-SNAPSHOT/./
Executing command: mkdir -p "r.com/domains/company.com/htdocs/docs/company/company-parent/1-SNAPSHOT/./"
Executing command: mkdir -p "r.com/domains/company.com/htdocs/docs/company/company-parent/1-SNAPSHOT/."
Executing command: scp -t "r.com/domains/company.com/htdocs/docs/company/company-parent/1-SNAPSHOT/./wagon4192311672559342478.zip"
Uploading: ./wagon4192311672559342478.zip to scp:ssh.provider.com/domains/company.com/htdocs/docs/company/company-parent/1-SNAPSHOT/
##########
Transfer finished. 40802 bytes copied in 0.064 seconds
Executing command: cd "r.com/domains/company.com/htdocs/docs/company/company-parent/1-SNAPSHOT/./"; unzip -q -o "wagon4192311672559342478.zip"; rm -f "wagon4192311672559342478.zip"
Executing command: chmod -Rf g+w,a+rX r.com/domains/company.com/htdocs/docs/company/company-parent/1-SNAPSHOT/
scp:ssh.provider.com/domains/company.com/htdocs/docs/company/company-parent/1-SNAPSHOT/ - Session: Disconnecting
scp:ssh.provider.com/domains/company.com/htdocs/docs/company/company-parent/1-SNAPSHOT/ - Session: Disconnected
POM configuration
The distribution management of the site:
<distributionManagement>
  <site>
    <id>company-docs</id>
    <name>Company Docs</name>
    <url>scp:ssh.provider.com/domains/company.com/htdocs/docs/${project.slug}/${project.artifactId}/${project.version}/docs</url>
  </site>
</distributionManagement>
The site deploy profile:
<profile>
  <id>deploy-site</id>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-site-plugin</artifactId>
        <dependencies>
          <dependency>
            <groupId>org.apache.maven.wagon</groupId>
            <artifactId>wagon-ssh</artifactId>
            <version>2.7</version>
          </dependency>
        </dependencies>
        <executions>
          <execution>
            <id>site-deploy</id>
            <phase>site</phase>
            <goals>
              <goal>deploy</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
</profile>
Well, after some research and a lot of trial and error I finally found the solution!
In the distributionManagement section you have to give the absolute path, starting from the absolute root of the server, in the url.
Also you should use scp:// and end the domain with : (see the example below).
<distributionManagement>
  <site>
    <id>company-docs</id>
    <name>Company Docs</name>
    <url>scp://ssh.provider.com:/home/user/domains/company.com/htdocs/docs/${project.slug}/${project.artifactId}/${project.version}/docs</url>
  </site>
</distributionManagement>
Now everything works like a charm!
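For reference, since the deploy-site profile above binds the maven-site-plugin's deploy goal to the site phase, a run along these lines should build and push the site (profile id as defined above; invocation is a sketch, not taken from the original answer):

mvn clean site -P deploy-site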

Maven Glassfish Plugin: Deploy application as exploded directory/folder

I need my JavaEE-application to be deployed on Glassfish as a directory, not a packaged WAR file.
Is it possible to deploy a directory to Glassfish with the Maven Glassfish Plugin?
With the admin console it's possible, but I also want to be able to do it from the command line.
The following configuration works for me (note that the artifact element points to a directory):
<plugin>
  <groupId>org.glassfish.maven.plugin</groupId>
  <artifactId>maven-glassfish-plugin</artifactId>
  <version>2.2-SNAPSHOT</version>
  <configuration>
    <glassfishDirectory>${glassfish.home}</glassfishDirectory>
    <user>${domain.username}</user>
    <passwordFile>${glassfish.home}/domains/${project.artifactId}/master-password</passwordFile>
    <autoCreate>true</autoCreate>
    <debug>true</debug>
    <echo>true</echo>
    <skip>${test.int.skip}</skip>
    <domain>
      <name>${project.artifactId}</name>
      <httpPort>8080</httpPort>
      <adminPort>4848</adminPort>
    </domain>
    <components>
      <component>
        <name>${project.artifactId}</name>
        <artifact>${project.build.directory}/${project.build.finalName}</artifact>
      </component>
    </components>
  </configuration>
</plugin>
The resulting asadmin command is:
asadmin --host localhost --port 4848 --user admin --passwordfile /home/pascal/opt/glassfishv3/glassfish/domains/maven-glassfish-testcase/master-password --interactive=false --echo=true --terse=true deploy --name maven-glassfish-testcase --force=false --precompilejsp=false --verify=false --enabled=true --generatermistubs=false --availabilityenabled=false --keepreposdir=false --keepfailedstubs=false --logReportedErrors=true --upload=false --help=false /home/pascal/Projects/stackoverflow/maven-glassfish-testcase/target/maven-glassfish-testcase
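As a side note, with this configuration the deployment is normally kicked off through the plugin's own goals rather than by calling asadmin directly; something like the following is the usual invocation (goal names as documented for maven-glassfish-plugin, and starting the domain first may be needed depending on autoCreate):

mvn glassfish:start-domain glassfish:deploy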
I did not get it to work with the maven plugin, but it is possible to deploy to glassfish from the command line using the asadmin command from the glassfish/bin directory:
asadmin deploy --contextroot context_root path_to_ear_or_directory

VMware Server: Best way to back up images

What is the best way to back up VMware Server (1.0.x) virtual machines?
The virtual machines in question are our development environment, and run isolated from the main network (so you can't just copy data from virtual to real servers).
The image files are normally in use and locked when the server is running, so it is difficult to back these up with the machines running.
Currently: I manually pause the servers when I leave and have a scheduled task that runs at midnight to robocopy the images to a remote NAS.
Is there a better way to do this, ideally without having to remember to pause the virtual machines?
VMware Server includes the command-line tool "vmware-cmd", which can be used to perform virtually any operation that can be performed through the console.
In this case you would simply add a "vmware-cmd suspend" to your script before starting your backup, and a "vmware-cmd start" after the backup is completed.
We use VMware Server as part of our build system to provide a known environment to run automated DB upgrades against, so we end up rolling back state as part of each build (driven by CruiseControl), and have found this interface to be rock solid.
Usage: /usr/bin/vmware-cmd <options> <vm-cfg-path> <vm-action> <arguments>
/usr/bin/vmware-cmd -s <options> <server-action> <arguments>
Options:
Connection Options:
-H <host> specifies an alternative host (if set, -U and -P must also be set)
-O <port> specifies an alternative port
-U <username> specifies a user
-P <password> specifies a password
General Options:
-h More detailed help.
-q Quiet. Minimal output
-v Verbose.
Server Operations:
/usr/bin/vmware-cmd -l
/usr/bin/vmware-cmd -s register <config_file_path>
/usr/bin/vmware-cmd -s unregister <config_file_path>
/usr/bin/vmware-cmd -s getresource <variable>
/usr/bin/vmware-cmd -s setresource <variable> <value>
VM Operations:
/usr/bin/vmware-cmd <cfg> getconnectedusers
/usr/bin/vmware-cmd <cfg> getstate
/usr/bin/vmware-cmd <cfg> start <powerop_mode>
/usr/bin/vmware-cmd <cfg> stop <powerop_mode>
/usr/bin/vmware-cmd <cfg> reset <powerop_mode>
/usr/bin/vmware-cmd <cfg> suspend <powerop_mode>
/usr/bin/vmware-cmd <cfg> setconfig <variable> <value>
/usr/bin/vmware-cmd <cfg> getconfig <variable>
/usr/bin/vmware-cmd <cfg> setguestinfo <variable> <value>
/usr/bin/vmware-cmd <cfg> getguestinfo <variable>
/usr/bin/vmware-cmd <cfg> getid
/usr/bin/vmware-cmd <cfg> getpid
/usr/bin/vmware-cmd <cfg> getproductinfo <prodinfo>
/usr/bin/vmware-cmd <cfg> connectdevice <device_name>
/usr/bin/vmware-cmd <cfg> disconnectdevice <device_name>
/usr/bin/vmware-cmd <cfg> getconfigfile
/usr/bin/vmware-cmd <cfg> getheartbeat
/usr/bin/vmware-cmd <cfg> getuptime
/usr/bin/vmware-cmd <cfg> getremoteconnections
/usr/bin/vmware-cmd <cfg> gettoolslastactive
/usr/bin/vmware-cmd <cfg> getresource <variable>
/usr/bin/vmware-cmd <cfg> setresource <variable> <value>
/usr/bin/vmware-cmd <cfg> setrunasuser <username> <password>
/usr/bin/vmware-cmd <cfg> getrunasuser
/usr/bin/vmware-cmd <cfg> getcapabilities
/usr/bin/vmware-cmd <cfg> addredo <disk_device_name>
/usr/bin/vmware-cmd <cfg> commit <disk_device_name> <level> <freeze> <wait>
/usr/bin/vmware-cmd <cfg> answer
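A minimal sketch of the suspend/copy/resume cycle described above; the .vmx path, the destination directory, and the use of rsync are placeholders to adapt, not part of the original setup:

#!/bin/sh
# Sketch: pause the guest, copy its directory, then resume it.
VM_CFG="/var/lib/vmware/Virtual Machines/dev01/dev01.vmx"   # hypothetical VM config path
DEST="/mnt/nas/vm-backups/dev01"                            # hypothetical backup target

/usr/bin/vmware-cmd "$VM_CFG" suspend trysoft               # cleanly pause the VM
rsync -a --delete "$(dirname "$VM_CFG")/" "$DEST/"          # copy only what has changed
/usr/bin/vmware-cmd "$VM_CFG" start trysoft                 # bring the VM back up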
Worth looking at rsync? If only part of a large image file is changing then rsync might be the fastest way to copy any changes.
I found an easy-to-follow guide for backing up VMs in VMware Server 2 here: Backup VMware Server 2
If I recall correctly, VMWare Server has a scripting interface, available via Perl or COM. You might be able to use that to automatically pause the VMs before running the backup.
If your backup software was shadow-copy aware, that might work, too.
There is a tool called (ahem) Hobocopy which will copy locked VM files. I would recommend taking a snapshot of the VM and then backing up the VMDK. Then merge the snapshot after the copy is complete.