BIG data - Setup Hadoop, HDFS, HBASE, hive - Installing Java and Hadoop - Part2

My wife takes all my money, so if this helped you in any way and you have some spare BitCoins, you may donate them to me - 16tb2Rgn4uDptrEuR94BkhQAZNgfoMj3ug

Keep this in mind

Host 1 - SuperNinja1 - 172.28.200.161 SuperMicro (The Master and running the NameNode. It also has H-Base and Hive installed)
Host 2 - SuperNinja2 - 172.28.200.163 SuperMicro (DataNode and a NodeManager)
Host 3 - SuperNinja3 - 172.28.200.165 SuperMicro (DataNode and a NodeManager, This node runs the Postgres instance for Hive)
Host 4 - SuperNinja4 - 172.28.200.150 HP Desktop (DataNode and a NodeManager)
Host 5 - SuperNinja5 - 172.28.200.153 HP Desktop (DataNode and a NodeManager)

For Hadoop and all the other stuff to work, you need Java. Seeing that I'm building on SLES 11 SP3, I downloaded the latest Java RPM and installed it with rpm -ivh
SuperNinja5:/opt/temp # rpm -ivh jdk-2000\:1.7.0-fcs.x86_64.rpm
Preparing...                ########################################### [100%]
   1:jdk                    ########################################### [100%]
Unpacking JAR files...
    rt.jar...
    jsse.jar...
    charsets.jar...
    tools.jar...
    localedata.jar...
SuperNinja1:/opt/temp #
SuperNinja2:/opt/temp # which java
/usr/bin/java
SuperNinja2:/opt/temp # ls -ltr /usr/bin/java
lrwxrwxrwx 1 root root 26 May 28 10:07 /usr/bin/java -> /usr/java/default/bin/java
SuperNinja5:/opt/temp # ls -ltr /usr/java/latest
lrwxrwxrwx 1 root root 18 May 28 10:07 /usr/java/latest -> /usr/java/jdk1.7.0
SuperNinja2:/opt/temp #
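
To confirm that each node picks up the new JDK, a quick check (the exact version string will vary with your update level):
java -version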

I created 2 groups and 2 users, one for Hadoop and one for Hbase. The Hadoop user is called hduser and belongs to the group hadoop; the other is hbuser, the H-Base user, which belongs to the hbase group.
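
The hbuser creation looked much the same as the hduser steps shown further down; roughly (a sketch mirroring those commands):
groupadd hbase
useradd -g hbase hbuser
mkdir -p /home/hbuser
chown -R hbuser:hbase /home/hbuser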

Let's start with Hadoop

Download the latest Hadoop from Apache's website
http://hadoop.apache.org/#Download+Hadoop
Place the downloaded file in /opt/temp

Make the directory /opt/app; this is where we will place the Hadoop binaries. gunzip and untar the file, and note the -C /opt/app flag: it tells tar to place the extracted contents in /opt/app
SuperNinja1:/opt/temp # mkdir -p /opt/app
SuperNinja1:/opt/temp # ls -ltr
total 135832
-rw-r--r-- 1 root root       194 May 14 15:29 ETH_MAC_ADDRESSES
-rw-r--r-- 1 root root 138943699 May 15 11:25 hadoop-2.4.0.tar.gz
SuperNinja1:/opt/temp # gunzip hadoop-2.4.0.tar.gz 
SuperNinja1:/opt/temp # tar -xvf hadoop-2.4.0.tar -C /opt/app
hadoop-2.4.0/
hadoop-2.4.0/bin/
hadoop-2.4.0/bin/mapred
hadoop-2.4.0/bin/hadoop
hadoop-2.4.0/bin/mapred.cmd
hadoop-2.4.0/bin/rcc
hadoop-2.4.0/bin/container-executor
hadoop-2.4.0/bin/hdfs
hadoop-2.4.0/bin/test-container-executor
Snip....Snip
hadoop-2.4.0/share/doc/hadoop/hadoop-streaming/images/icon_error_sml.gif
hadoop-2.4.0/share/doc/hadoop/hadoop-streaming/images/banner.jpg
hadoop-2.4.0/share/doc/hadoop/hadoop-streaming/images/bg.jpg
hadoop-2.4.0/share/doc/hadoop/hadoop-streaming/images/icon_info_sml.gif
hadoop-2.4.0/share/doc/hadoop/hadoop-streaming/images/expanded.gif
hadoop-2.4.0/share/doc/hadoop/hadoop-streaming/images/newwindow.png
hadoop-2.4.0/share/doc/hadoop/hadoop-streaming/images/maven-logo-2.gif
hadoop-2.4.0/share/doc/hadoop/hadoop-streaming/images/h3.jpg
hadoop-2.4.0/share/doc/hadoop/hadoop-streaming/images/breadcrumbs.jpg
hadoop-2.4.0/share/doc/hadoop/hadoop-streaming/images/h5.jpg
hadoop-2.4.0/share/doc/hadoop/hadoop-streaming/images/external.png
hadoop-2.4.0/share/doc/hadoop/hadoop-streaming/images/logos/
hadoop-2.4.0/share/doc/hadoop/hadoop-streaming/images/logos/build-by-maven-white.png
hadoop-2.4.0/share/doc/hadoop/hadoop-streaming/images/logos/build-by-maven-black.png
hadoop-2.4.0/share/doc/hadoop/hadoop-streaming/images/logos/maven-feather.png
hadoop-2.4.0/share/doc/hadoop/hadoop-streaming/images/icon_warning_sml.gif
hadoop-2.4.0/share/doc/hadoop/hadoop-streaming/images/collapsed.gif
hadoop-2.4.0/share/doc/hadoop/hadoop-streaming/images/logo_maven.jpg
hadoop-2.4.0/share/doc/hadoop/hadoop-streaming/images/logo_apache.jpg
hadoop-2.4.0/share/doc/hadoop/hadoop-streaming/images/icon_success_sml.gif
hadoop-2.4.0/share/doc/hadoop/hadoop-streaming/images/apache-maven-project-2.png
SuperNinja1:/opt/temp #

Let's see what happened; change directory to /opt/app
SuperNinja1:/opt/temp # cd /opt/app
SuperNinja1:/opt/app # ls
hadoop-2.4.0
SuperNinja1:/opt/app #

To make it friendlier, I renamed hadoop-2.4.0 to hadoop
SuperNinja5:/opt/app # mv hadoop-2.4.0 hadoop
SuperNinja1:/opt/app # ls -ltr
total 8
drwxr-xr-x 9 67974 users 4096 Mar 31 11:15 hadoop
SuperNinja1:/opt/app #

Next we need a user for Hadoop. I created a user called hduser in the group hadoop; also create the user's home directory and set its permissions
SuperNinja1:/opt/app # groupadd hadoop
SuperNinja1:/opt/app # useradd -g hadoop hduser
SuperNinja1:/opt/app # mkdir -p /home/hduser
SuperNinja1:/opt/app # chown -R hduser:hadoop /home/hduser

We then log in as the newly created user and generate the user's ssh keys. With this user you must be able to log into ALL the servers without any password
SuperNinja1:/opt/app # su - hduser
hduser@SuperNinja1:~> ssh-keygen -t rsa -P ""
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hduser/.ssh/id_rsa): 
Created directory '/home/hduser/.ssh'.
Your identification has been saved in /home/hduser/.ssh/id_rsa.
Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.
The key fingerprint is:
7e:ce:17:29:64:21:54:73:2c:fd:5c:64:96:b1:91:fc [MD5] hduser@SuperNinja1
The key's randomart image is:
+--[ RSA 2048]----+
|       ...oo. .+B|
|        . ooo  *+|
|         . o o o.|
|          o   o E|
|        So   .   |
|       .  . o    |
|        . .. .   |
|         +  .    |
|          o.     |
+--[MD5]----------+
hduser@SuperNinja1:~> ls -la .ssh
total 16
drwx------ 2 hduser hadoop 4096 May 15 11:29 .
drwxr-xr-x 3 hduser hadoop 4096 May 15 11:29 ..
-rw------- 1 hduser hadoop 1679 May 15 11:29 id_rsa
-rw-r--r-- 1 hduser hadoop  400 May 15 11:29 id_rsa.pub
hduser@SuperNinja1:~> echo $HOME
/home/hduser
hduser@SuperNinja1:~> cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
hduser@SuperNinja1:~> ls -la .ssh
total 20
drwx------ 2 hduser hadoop 4096 May 15 11:30 .
drwxr-xr-x 3 hduser hadoop 4096 May 15 11:29 ..
-rw-r--r-- 1 hduser hadoop  400 May 15 11:30 authorized_keys
-rw------- 1 hduser hadoop 1679 May 15 11:29 id_rsa
-rw-r--r-- 1 hduser hadoop  400 May 15 11:29 id_rsa.pub
hduser@SuperNinja1:~> ssh localhost
The authenticity of host 'localhost (::1)' can't be established.
ECDSA key fingerprint is 06:a7:bc:61:a0:de:14:04:23:d9:2a:84:75:37:23:f4 [MD5].
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.

hduser@SuperNinja1:~> exit
logout
Connection to localhost closed.

hduser@SuperNinja1:~> exit
logout

Create the hduser on all the servers using the procedure above. Then make a text file containing all the servers' public keys and place this file on every server as /home/hduser/.ssh/authorized_keys. This ensures that the hduser can log into ALL the servers with no password. Below is an example of what it looks like. Yes, I did change my keys for this printout, so don't even try it...
SuperNinja1:~ # cd /home/hduser/.ssh/
SuperNinja1:/home/hduser/.ssh # cat authorized_keys
ssh-rsa jCfon0dWBqIffU9G3q+HVzYRs6FDNrov hduser@SuperNinja1
ssh-rsa n0fwO3pBo8bQc2bA9lvKEIHbTwmUWDcu hduser@SuperNinja2
ssh-rsa dwS0ltr6/H1VPaU1X/OS3/Jq83yxjAYT hduser@SuperNinja3
ssh-rsa u1HzxsOH8Leu07JQA3piUaB56B7eJNFz hduser@SuperNinja4
ssh-rsa pnbYOuKz093zZzSMt80AmijczuPctnaf hduser@SuperNinja5
SuperNinja1:/home/hduser/.ssh # 
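
If you don't feel like copy-pasting keys between five hosts by hand, a loop along these lines can build and push the combined file. A sketch only: it assumes password logins still work and that the hostnames resolve as per /etc/hosts.
HOSTS="SuperNinja1 SuperNinja2 SuperNinja3 SuperNinja4 SuperNinja5"
# Collect every node's public key into one file
> /tmp/all_keys
for h in $HOSTS; do
    ssh hduser@$h cat /home/hduser/.ssh/id_rsa.pub >> /tmp/all_keys
done
# Push the combined file back out and lock the permissions down
for h in $HOSTS; do
    scp /tmp/all_keys hduser@$h:/home/hduser/.ssh/authorized_keys
    ssh hduser@$h chmod 600 /home/hduser/.ssh/authorized_keys
done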

The next step is to log in as hduser and set some variables in the .bashrc file on all the servers. Set the following in the .bashrc file in the hduser's home directory - see below
SuperNinja1:/home/hduser/.ssh # cd /
SuperNinja1:/ # su - hduser
hduser@SuperNinja1:~> pwd
/home/hduser
hduser@SuperNinja1:~> cat .bashrc
#Set Hadoop-related environment variables
export HADOOP_HOME=/opt/app/hadoop
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HIVE_HOME=/opt/app/hive
export PATH=$HADOOP_HOME/bin:$HIVE_HOME/bin:$PATH

# Set JAVA_HOME (we will also configure JAVA_HOME directly for Hadoop later on)
export JAVA_HOME=/usr/java/latest

# Some convenient aliases and functions for running Hadoop-related commands
unalias fs &> /dev/null
alias fs="hadoop fs"
unalias hls &> /dev/null
alias hls="fs -ls"

# If you have LZO compression enabled in your Hadoop cluster and
# compress job outputs with LZOP (not covered in this tutorial):
# Conveniently inspect an LZOP compressed file from the command
# line; run via:
#
# $ lzohead /hdfs/path/to/lzop/compressed/file.lzo
#
# Requires installed 'lzop' command.
#
lzohead () {
    hadoop fs -cat $1 | lzop -dc | head -1000 | less
}

# Add Hadoop bin/ directory to PATH
export PATH=$PATH:$HADOOP_HOME/bin
# For jps
export PATH=$PATH:$JAVA_HOME/bin
hduser@SuperNinja1:~>

Log out and log in again as the hduser and check that the .bashrc file is loaded

hduser@SuperNinja1:~> exit
logout
SuperNinja1:/ # su - hduser
hduser@SuperNinja1:~> echo $HADOOP_HOME
/opt/app/hadoop
hduser@SuperNinja1:~> echo $HIVE_HOME
/opt/app/hive
hduser@SuperNinja1:~>
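
Rather than editing .bashrc five times, you can push the same file to the other nodes once the passwordless ssh from the previous step works; a minimal sketch:
for h in SuperNinja2 SuperNinja3 SuperNinja4 SuperNinja5; do
    scp ~/.bashrc hduser@$h:~/.bashrc
done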

Yea! We can start configuring Hadoop. All changes must be made as the hduser.
All the files needed for Hadoop are in /opt/app/hadoop/etc/hadoop

SuperNinja1:/ # su - hduser
hduser@SuperNinja1:~> cd /opt/app/hadoop/etc/hadoop/
hduser@SuperNinja1:/opt/app/hadoop/etc/hadoop> ls -ltr
total 132
-rw-r--r-- 1 hduser hadoop  2268 Mar 31 10:49 ssl-server.xml.example
-rw-r--r-- 1 hduser hadoop  2316 Mar 31 10:49 ssl-client.xml.example
-rw-r--r-- 1 hduser hadoop 11169 Mar 31 10:49 log4j.properties
-rw-r--r-- 1 hduser hadoop  9257 Mar 31 10:49 hadoop-policy.xml
-rw-r--r-- 1 hduser hadoop  2490 Mar 31 10:49 hadoop-metrics.properties
-rw-r--r-- 1 hduser hadoop  3589 Mar 31 10:49 hadoop-env.cmd
-rw-r--r-- 1 hduser hadoop  2178 Mar 31 10:49 yarn-env.cmd
-rw-r--r-- 1 hduser hadoop  4113 Mar 31 10:49 mapred-queues.xml.template
-rw-r--r-- 1 hduser hadoop  1383 Mar 31 10:49 mapred-env.sh
-rw-r--r-- 1 hduser hadoop   918 Mar 31 10:49 mapred-env.cmd
-rw-r--r-- 1 hduser hadoop   620 Mar 31 10:49 httpfs-site.xml
-rw-r--r-- 1 hduser hadoop    21 Mar 31 10:49 httpfs-signature.secret
-rw-r--r-- 1 hduser hadoop  1657 Mar 31 10:49 httpfs-log4j.properties
-rw-r--r-- 1 hduser hadoop  1449 Mar 31 10:49 httpfs-env.sh
-rw-r--r-- 1 hduser hadoop  1774 Mar 31 10:49 hadoop-metrics2.properties
-rw-r--r-- 1 hduser hadoop   318 Mar 31 10:49 container-executor.cfg
-rw-r--r-- 1 hduser hadoop  1335 Mar 31 10:49 configuration.xsl
-rw-r--r-- 1 hduser hadoop  3589 Mar 31 10:49 capacity-scheduler.xml
-rw-r--r-- 1 hduser hadoop   206 May 15 12:28 mapred-site.xml
-rw-r--r-- 1 hduser hadoop  3512 May 15 12:54 hadoop-env.sh
-rw-r--r-- 1 hduser hadoop  4878 May 16 11:06 yarn-env.sh
-rw-r--r-- 1 hduser hadoop   679 May 16 11:27 yarn-site.xml
-rw-r--r-- 1 hduser hadoop   655 May 22 14:40 derby.log
drwxr-xr-x 5 hduser hadoop  4096 May 22 14:40 metastore_db
-rw-r--r-- 1 hduser hadoop   334 May 26 07:42 core-site.xml
-rw-r--r-- 1 hduser hadoop    60 May 28 11:58 slaves
-rw-r--r-- 1 hduser hadoop   510 May 29 11:14 hdfs-site.xml
hduser@SuperNinja1:/opt/app/hadoop/etc/hadoop>

Determine which process is using swap

Reason 2 why you should give me some BitCoins... I'm poor?.... If this helped you in any way and you have some spare BitCoins, you may donate them to me - 16tb2Rgn4uDptrEuR94BkhQAZNgfoMj3ug

My server recently ran out of swap space, causing slow calculations, slow response times, etc.

cd /opt/temp

Using vi, create a script called swap.sh, paste in the following, and save with ESC then :wq!
#!/bin/bash
# Get current swap usage for all running processes.
# This script reports the PID, the command line that started it,
# and how much swap space it is using.
# King Rat 08/02/2014
SUM=0
OVERALL=0
for DIR in `find /proc/ -maxdepth 1 -type d -regex "^/proc/[0-9]+"`
do
    PID=`echo $DIR | cut -d / -f 3`
    PROGNAME=`ps -p $PID -o comm --no-headers`
    # Sum up all the Swap entries in the process's smaps
    for SWAP in `grep Swap $DIR/smaps 2>/dev/null | awk '{ print $2 }'`
    do
        let SUM=$SUM+$SWAP
    done
    if (( $SUM > 0 )); then
        echo "****************************"
        echo "PID=$PID swapped $SUM in KB ($PROGNAME)"
        echo "cmd line that activate the PID is below "
        xargs -0 echo < /proc/$PID/cmdline
        echo "*****************************************"
    fi
    let OVERALL=$OVERALL+$SUM
    SUM=0
done
echo " "
echo "Overall swap used: $OVERALL in KB"
let OVERALL=$OVERALL/1024
echo "Overall swap used: $OVERALL in MB"
echo " "
echo "******** swapon -s *********"
swapon -s
echo " "
echo "********  free -m  *********"
free -m

Make the script executable with chmod +x swap.sh. Run it with ./swap.sh; see the output below
someserver:/opt/temp # ./swap.sh
****************************
PID=1 swapped 104 in KB (init)
cmd line that activate the PID is below
init [3] 
*****************************************
****************************
PID=708 swapped 16 in KB (oracle)
cmd line that activate the PID is below
ora_w006_DTE
*****************************************
****************************
PID=1504 swapped 28 in KB (cron)
cmd line that activate the PID is below
/usr/sbin/cron
*****************************************

Bla, Bla, Bla

****************************
PID=63545 swapped 12 in KB (oracle)
cmd line that activate the PID is below
oracle (LOCAL=NO)
*****************************************
****************************
PID=63568 swapped 12 in KB (oracle)
cmd line that activate the PID is below
ora_w007
*****************************************
****************************
PID=64038 swapped 16 in KB (oracle)
cmd line that activate the PID is below
ora_w00d
*****************************************
  
Overall swap used: 310456 in KB
Overall swap used: 303 in MB
  
******** swapon -s *********
Filename                Type        Size    Used    Priority
/dev/mapper/system-swap                 partition    26214392    4371104    -1
  
********  free -m  *********
             total       used       free     shared    buffers     cached
Mem:         96714      95321       1393          0         50      71053
-/+ buffers/cache:      24217      72497
Swap:        25599       4268      21331
someserver:/opt/temp #
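
As an aside, on kernels that expose a VmSwap field in /proc/<pid>/status (roughly 2.6.34 and later, so SLES 11 SP3 qualifies), much the same numbers can be pulled with a shorter loop; a rough sketch:
# Print swap usage in kB per process, biggest first
for f in /proc/[0-9]*/status; do
    awk '/^Name:/ {n=$2} /^VmSwap:/ {print $2, "kB", n}' "$f" 2>/dev/null
done | sort -rn | head -20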

Problem with ETH hardware not seen

Reason 1 why you should give me some BitCoins... uuhhh, mmmh, uuuhh, mmmh...... If this helped you in any way and you have some spare BitCoins, you may donate them to me - 16tb2Rgn4uDptrEuR94BkhQAZNgfoMj3ug


ETH0 to ETH3 not recognized

The NIC board was replaced on an HP DL380 server

A situation arose where the NIC board was replaced, and after the replacement ETH0, ETH1, ETH2 and ETH3 came up as ETH4, ETH5, ETH6 and ETH7. The problem is in /etc/udev/rules.d/70-persistent-net.rules: the new board was identified, but because the old board was still in the configuration, the new ports were added as ETH4 to ETH7. To fix this, log into the ILOM and note the MAC addresses of the NICs

Then set the file right, i.e. change NAME="ethX" to the correct name and set the MAC addresses as per the ILOM printout. Comment out the old board's entries.

someserver:~ # cat /etc/udev/rules.d/70-persistent-net.rules
# This file was automatically generated by the /lib/udev/write_net_rules
# program run by the persistent-net-generator.rules rules file.
#
# You can modify it, as long as you keep each rule on a single line.

# Old board - commented out
# PCI device 0x14e4:0x1657 (tg3)
#SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="2c:76:8a:54:89:90", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

# PCI device 0x14e4:0x1657 (tg3)
#SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="2c:76:8a:54:89:92", ATTR{type}=="1", KERNEL=="eth*", NAME="eth2"

# PCI device 0x14e4:0x1657 (tg3)
#SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="2c:76:8a:54:89:91", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"

# PCI device 0x14e4:0x1657 (tg3)
#SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="2c:76:8a:54:89:93", ATTR{type}=="1", KERNEL=="eth*", NAME="eth3"

# Below are the new board settings; MAC addresses as per the ILOM printout,
# with NAME="ethX" changed to correspond
# PCI device 0x14e4:0x1657 (tg3)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="ac:16:2d:70:11:4a", ATTR{type}=="1", KERNEL=="eth*", NAME="eth2"

# PCI device 0x14e4:0x1657 (tg3)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="ac:16:2d:70:11:4b", ATTR{type}=="1", KERNEL=="eth*", NAME="eth3"

# PCI device 0x14e4:0x1657 (tg3)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="ac:16:2d:70:11:49", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"

# PCI device 0x14e4:0x1657 (tg3)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="ac:16:2d:70:11:48", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
someserver:~ #

Reboot the server and make sure it is pingable afterwards
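
After the reboot, a quick way to cross-check that the kernel named the interfaces the way you intended is to read the MACs straight out of sysfs, for example:
grep -H . /sys/class/net/eth*/address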

What I do in my 'spare' time

I like this, reminds me of me, if that's possible... BitcoinS Pleaseeezzzee 16tb2Rgn4uDptrEuR94BkhQAZNgfoMj3ug
All by myself

Open port scanner for Solaris


Running out of ideas to convince you about giving me your BitCoins, but if this helped you in any way and you have some spare BitCoins, you may donate them to me - 16tb2Rgn4uDptrEuR94BkhQAZNgfoMj3ug


ports.sh will report all open ports and will attempt to find the process linked to each open port.
The script also checks the /etc/services file for an entry matching the open port.
  • Usage
    • Copy the code below and, using vi, create a file called ports.sh in /opt/temp
    • Save the file ports.sh by using vi command :wq!
    • Change the permissions on ports.sh with chmod +x ports.sh
    • Execute ports.sh with ./ports.sh
     

#!/bin/bash
# King Rat 2014/03/21
#
# This script scans for open ports, ties each one to a PID, and checks
# whether there is an entry for the port in /etc/services.

echo " "
echo " "
echo "---------------------------------------------------------------------"
echo "Open ports scanner"
echo "---------------------------------------------------------------------"
echo -e "PID\tProcess Name and Port"
echo -e "PID\tProcess Name and Port" > ports
echo "_________________________________________________________"
echo "_________________________________________________________" >> ports
for proc in `ptree -a | sort -n | awk '/ptree/ {next} {print $1};'`; do
    out=`pfiles $proc 2>/dev/null | egrep "port:"`
    if [ ! -z "$out" ]; then
        name=`ps -fo comm= -p $proc`
        echo -e "$proc\t$name\n$out"
        echo -e "$proc\t$name\n$out" >> ports
        echo "_________________________________________________________" >> ports
        echo " "; echo "................................................"
        arr=$(echo $out | tr ":" "\n")
        for x in $arr; do
            # Only keep fields that are purely numeric, i.e. port numbers
            if [[ `echo $x | sed 's/^[-+0-9][0-9]*//' | wc -c` -eq 1 ]]; then
                if [ `cat /etc/services | grep -w $x | wc -l` -ge 1 ]; then
                    nns=`cat /etc/services | grep -w $x`
                    nns="/etc/services entry for Port "$x" ------- "$nns
                    echo $nns
                else
                    echo "/etc/services entry for Port "$x" ------- !! not found"
                fi
            fi
        done
        echo "___________________________________________________________"
        echo " "; echo " "
        echo -e "PID\tProcess Name and Port"
        echo "___________________________________________________________"
    fi
done
echo "Open port scanner was executed"

Create Physical Volumes (PV), Volume groups (VG) and Logical Volumes (LV) using SLES11 SP3

Using pvscan, vgscan and lvscan

I'm extremely poor, so if this helped you in any way and you have some spare BitCoins, you may donate them to me - 16tb2Rgn4uDptrEuR94BkhQAZNgfoMj3ug

Using Linux LVM makes a lot of sense: it's easy to maintain and easy to grow.

The machine in question has 8 x 600GB drives

What I would normally do is set up hardware RAID 1+0 on a bunch of disks. In the case below, the first 2 drives are set up as RAID 1+0 and presented as /dev/sda; this disk is used for the OS install, and the leftover space on /dev/sda was used to create /dev/sda3, i.e. a 3rd partition to be used as a PV, VG and LV

The 2nd disk, /dev/sdb, is set up as RAID 1+0 as well and consists of 2 x 600GB drives; partition 3, /dev/sdb3, was also created for use as a PV, VG and LV
The 3rd disk, /dev/sdc, is set up as RAID 1+0 as well and consists of 4 x 600GB drives; partition 3, /dev/sdc3, was also created for use as a PV, VG and LV

I can hear you saying: why the hell is he using RAID 1+0, he is losing half the available space. The reason for this is that the server is a high-availability Telecom-grade server, so redundancy etc. is vital

To check what you have, use hwinfo
Ninja141:~ # hwinfo --disk --short
disk:                                                           
  /dev/sdb             HP LOGICAL VOLUME
  /dev/sdc             HP LOGICAL VOLUME
  /dev/sda             HP LOGICAL VOLUME
Ninja141:~ #

Let's do /dev/sda first. I have already set up partition 3, as can be seen below
Ninja141:~ # fdisk -l /dev/sda

Disk /dev/sda: 600.1 GB, 600093712384 bytes
255 heads, 63 sectors/track, 72957 cylinders, total 1172058032 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 262144 bytes / 262144 bytes
Disk identifier: 0x00012b9b

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *         512     1060351      529920   83  Linux
/dev/sda2         1060352   147862015    73400832   8e  Linux LVM
/dev/sda3       147862016  1172058031   512098008   8e  Linux LVM
Ninja141:~ #

Partition 1 (/dev/sda1) and partition 2 (/dev/sda2) are being used for the OS; partition 3 (/dev/sda3) was created as follows
fdisk /dev/sda
n - Create partition 3
p - Primary partition
t - Set the type to 8e (Linux LVM)
w - Remember to write

fdisk /dev/sdb
n - Create partition 3
p - Primary partition
t - Set the type to 8e (Linux LVM)
w - Remember to write

fdisk /dev/sdc
n - Create partition 3
p - Primary partition
t - Set the type to 8e (Linux LVM)
w - Remember to write
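
If you have many disks to do, the same keystrokes can be fed to fdisk from a here-document. This is a sketch only, so double-check it against your own layout before running it; the blank lines accept the default first and last sectors for the new partition 3.
fdisk /dev/sda <<'EOF'
n
p
3


t
3
8e
w
EOF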

Do partprobe afterwards to sync the changes
Ninja141:~ # partprobe

Create the PVs (Physical Volumes) and VGs (Volume Groups)
pvcreate -ff -y  /dev/sda3
pvcreate -ff -y  /dev/sdb3
pvcreate -ff -y  /dev/sdc3

vgcreate disk0 /dev/sda3
vgcreate disk1 /dev/sdb3
vgcreate disk2 /dev/sdc3 

After all of this, vgscan and pvscan should show something like the below
Ninja141:~ # vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "disk2" using metadata type lvm2
  Found volume group "disk1" using metadata type lvm2
  Found volume group "disk0" using metadata type lvm2
  Found volume group "system" using metadata type lvm2
Ninja141:~ # pvscan
  PV /dev/sdb3   VG disk1    lvm2 [558.88 GiB / 558.88 GiB free]
  PV /dev/sdc3   VG disk2    lvm2 [1.09 TiB / 1.09 TiB free]
  PV /dev/sda3   VG disk0    lvm2 [488.37 GiB / 488.37 GiB free]
  PV /dev/sda2   VG system   lvm2 [70.00 GiB / 10.00 GiB free]
  Total: 4 [2.18 TiB] / in use: 4 [2.18 TiB] / in no VG: 0 [0   ]
Ninja141:~ # 

Create the LV's (Logical Volumes)
lvcreate -L 200G -n part0 disk0
lvcreate -L 270G -n part1 disk0
lvcreate -L 500G -n part0 disk2
lvcreate -L  82G -n part2 disk1
lvcreate -L 860G -n part1 disk1
lvcreate -L 250G -n part0 disk1

When you do an lvscan, you should see something similar to the below
Ninja141:~ # lvscan
  ACTIVE            '/dev/disk1/part0' [247.00 GiB] inherit
  ACTIVE            '/dev/disk1/part1' [852.64 GiB] inherit
  ACTIVE            '/dev/disk1/part2' [82.00 GiB] inherit
  ACTIVE            '/dev/disk2/part0' [495.00 GiB] inherit
  ACTIVE            '/dev/disk0/part0' [195.00 GiB] inherit
  ACTIVE            '/dev/disk0/part1' [268.00 GiB] inherit
  ACTIVE            '/dev/system/home' [1.00 GiB] inherit
  ACTIVE            '/dev/system/opt' [10.00 GiB] inherit
  ACTIVE            '/dev/system/root' [2.00 GiB] inherit
  ACTIVE            '/dev/system/srv' [7.00 GiB] inherit
  ACTIVE            '/dev/system/swap' [25.00 GiB] inherit
  ACTIVE            '/dev/system/tmp' [5.00 GiB] inherit
  ACTIVE            '/dev/system/usr' [6.00 GiB] inherit
  ACTIVE            '/dev/system/var' [4.00 GiB] inherit
Ninja141:~ #

Next step: create the filesystems. In this case I wanted ext3
mkfs.ext3 /dev/disk0/part0
mkfs.ext3 /dev/disk0/part1
mkfs.ext3 /dev/disk1/part0
mkfs.ext3 /dev/disk1/part1
mkfs.ext3 /dev/disk1/part2
mkfs.ext3 /dev/disk2/part0

Create your directories
mkdir -p /opt/app
mkdir -p /opt/mystuff
mkdir -p /opt/mystuff/home
mkdir -p /backup
mkdir -p /postgrestablespace
mkdir -p /mongodata

Change the /etc/fstab file to mount the directories
Ninja141:~ # cat /etc/fstab
/dev/system/swap     swap                 swap       defaults              0 0
/dev/system/root     /                    ext3       defaults              1 1
/dev/sda1            /boot                ext3       acl,user_xattr        1 2
/dev/system/home     /home                ext3       defaults              1 2
/dev/system/opt      /opt                 ext3       defaults              1 2
/dev/system/srv      /srv                 ext3       defaults              1 2
/dev/system/tmp      /tmp                 ext3       defaults              1 2
/dev/system/usr      /usr                 ext3       defaults              1 2
/dev/system/var      /var                 ext3       defaults              1 2
proc                 /proc                proc       defaults              0 0
sysfs                /sys                 sysfs      noauto                0 0
debugfs              /sys/kernel/debug    debugfs    noauto                0 0
usbfs                /proc/bus/usb        usbfs      noauto                0 0
devpts               /dev/pts             devpts     mode=0620,gid=5       0 0
/dev/disk0/part0     /opt/app             ext3       defaults              1 2
/dev/disk0/part1     /opt/mystuff         ext3       defaults              1 2
/dev/disk1/part0     /opt/mystuff/home    ext3       defaults              1 2
/dev/disk1/part1     /backup              ext3       defaults              1 2
/dev/disk1/part2     /postgrestablespace  ext3       defaults              1 2
/dev/disk2/part0     /mongodata           ext3       defaults              1 2
Ninja141:~ #

Last but not least, mount the directories
mount /opt/app
mount /opt/mystuff
mount /opt/mystuff/home
mount /backup
mount /postgrestablespace
mount /mongodata

If all is ok, df -h should display the following
Ninja141:~ # df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/system-root  2.0G  368M  1.6G  20% /
udev                      48G  208K   48G   1% /dev
tmpfs                     48G     0   48G   0% /dev/shm
/dev/sda1                510M  132M  352M  28% /boot
/dev/mapper/system-home 1008M   34M  924M   4% /home
/dev/mapper/system-opt   9.9G  694M  8.7G   8% /opt
/dev/mapper/system-srv   6.9G  3.7G  3.0G  56% /srv
/dev/mapper/system-tmp   5.0G  139M  4.6G   3% /tmp
/dev/mapper/system-usr   6.0G  3.9G  1.8G  69% /usr
/dev/mapper/system-var   4.0G  300M  3.5G   8% /var
/dev/mapper/disk0-part0  192G  4.4G  178G   3% /opt/app
/dev/mapper/disk0-part1  264G  217G   34G  87% /opt/mystuff
/dev/mapper/disk1-part0  250G  200G   33G  80% /opt/mystuff/home 
/dev/mapper/disk1-part1  860G  382M  231G   1% /backup
/dev/mapper/disk1-part2   82G  382M    1G   1% /postgrestablespace
/dev/mapper/disk2-part0  500G  323M  1.8G   1% /mongodata
Ninja141:~ #

To grow and shrink the space, use yast
Go to System > Partitioner > Yes (we know what we are doing)
Select Volume Management; the space bar opens the volumes. Go to disk1, press space, go to part4 and press ENTER, and make sure it says * Mount Point: /xxxxx
TAB to Resize
Enter the new value, in this case 400G
Click OK and TAB to Next
TAB to Finish and ENTER

It will take some time to complete
When done, exit yast
yast will automatically mount the /xxxxx volume again
See, told you it was easy.....
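
If you prefer the command line over yast, growing a volume is normally just two commands (a sketch; ext3 can usually be grown while mounted, but shrinking requires unmounting first):
lvextend -L 400G /dev/disk1/part1
resize2fs /dev/disk1/part1
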
More reading :- http://en.wikipedia.org/wiki/Logical_Volume_Manager_%28Linux%29

BIG data - Setup Hadoop, HDFS, HBASE, hive - Machine setup - Part1

Setup Hadoop, HDFS, HBASE, hive

My, what an experience, if you have never done this before, be afraid, very afraid. Just joking, once you have the basics, it is fairly simple.

I have to support my lavish lifestyle, so if this helped you in any way and you have some spare BitCoins, you may donate them to me - 16tb2Rgn4uDptrEuR94BkhQAZNgfoMj3ug

 

My setup

3 x Supermicro Servers 813MTQ-600CB
  • Intel Xeon E5-2609v2 CPUs
  • 64GB Memory
  • 4 x 1TB 7200rpm SATA drives
  • 4 Port SATA raid controller
2 x HP desktop machines, simple machines,
  • 16GB RAM
  • 2 x 1TB drives, nothing fancy
All the machines were connected via ETH0 to a small switch; for heartbeat signals I used ETH1, connected to another small switch. I wanted to test load balancing using PaceMaker and CoroSync on Apache, so that's the reason for the heartbeat NICs

Keep this in mind

Host 1 - SuperNinja1 - 172.28.200.161 SuperMicro (The Master and running the NameNode. It also has H-Base and Hive installed)
Host 2 - SuperNinja2 - 172.28.200.163 SuperMicro (DataNode and a NodeManager)
Host 3 - SuperNinja3 - 172.28.200.165 SuperMicro (DataNode and a NodeManager, This node runs the Postgres instance for Hive)
Host 4 - SuperNinja4 - 172.28.200.150 HP Desktop (DataNode and a NodeManager)
Host 5 - SuperNinja5 - 172.28.200.153 HP Desktop (DataNode and a NodeManager)

Setup from my Zabbix server

[screenshot: the setup as seen from my Zabbix server]

So let's get cracking - Setup the machines

I use SLES 11 SP3. Once you have the OS installed, change the ETH setups as follows - I assigned ETH0 as the normal TCP/IP interface and ETH1 as the heartbeat on internal IPs. You can skip the heartbeats, as these were for my PaceMaker and CoroSync testing; check the blog for a setup guide on PaceMaker and CoroSync.
The two files in question are ifcfg-eth0 and ifcfg-eth1
SuperNinja1:~ # cat /etc/sysconfig/network/ifcfg-eth0
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='172.28.200.161'
NETMASK='255.255.255.0'
GATEWAY='172.28.200.1'
NM_CONTROLLED='no'
SuperNinja1:~ # cat /etc/sysconfig/network/ifcfg-eth1
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='172.16.0.5/24'
NM_CONTROLLED='no'
SuperNinja1:~ #

Set the 2 IP addresses in the 2 files; the normal TCP/IP address is 172.28.200.161 and the heartbeat IP is set to 172.16.0.5 for this host. Set up all your hosts the same way (with different IP addresses, of course)

Restart networking for the changes to take effect
SuperNinja1:~ # service network restart
Shutting down network interfaces:
    eth0      device: Intel Corporation I350 Gigabit Network Connec                                                                                                        done
    eth1      device: Intel Corporation I350 Gigabit Network Connec                                                                                                        done
Shutting down service network  .  .  .  .  .  .  .  .  .                                                                                                                   done
Hint: you may set mandatory devices in /etc/sysconfig/network/config
Setting up network interfaces:
    eth0      device: Intel Corporation I350 Gigabit Network Connec
    eth0      IP address: 172.28.200.161/24                                                                                                                                done
    eth1      device: Intel Corporation I350 Gigabit Network Connec
    eth1      IP address: 172.16.0.5/24                                                                                                                                    done
Setting up service network  .  .  .  .  .  .  .  .  .  .                                                                                                                   done
SuperNinja1:~ # ifconfig -a
eth0      Link encap:Ethernet  HWaddr 0C:C4:7A:03:70:18  
          inet addr:172.28.200.161  Bcast:172.28.200.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:65826308 errors:0 dropped:18078140 overruns:0 frame:0
          TX packets:258625520 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:13830916872 (13190.1 Mb)  TX bytes:373734605729 (356421.0 Mb)
          Memory:fb920000-fb940000 

eth1      Link encap:Ethernet  HWaddr 0C:C4:7A:03:70:19  
          inet addr:172.16.0.5  Bcast:172.16.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
          Memory:fb900000-fb920000 

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:2515035 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2515035 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:621406221 (592.6 Mb)  TX bytes:621406221 (592.6 Mb)

SuperNinja1:~ #

If all is set up correctly, you should be able to ping all hosts.
SuperNinja1:~ # ping -c 4 SuperNinja2
PING SuperNinja2.xxxx.com (172.28.200.163) 56(84) bytes of data.
64 bytes from SuperNinja2.xxxx.com (172.28.200.163): icmp_seq=1 ttl=64 time=0.141 ms
64 bytes from SuperNinja2.xxxx.com (172.28.200.163): icmp_seq=2 ttl=64 time=0.146 ms
64 bytes from SuperNinja2.xxxx.com (172.28.200.163): icmp_seq=3 ttl=64 time=0.161 ms
64 bytes from SuperNinja2.xxxx.com (172.28.200.163): icmp_seq=4 ttl=64 time=0.177 ms

--- SuperNinja2.xxxx.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2997ms
rtt min/avg/max/mdev = 0.141/0.156/0.177/0.016 ms
SuperNinja1:~ # ping -c 4 SuperNinja3
PING SuperNinja3.xxxx.com (172.28.200.165) 56(84) bytes of data.
64 bytes from SuperNinja3.xxxx.com (172.28.200.165): icmp_seq=1 ttl=64 time=0.113 ms
64 bytes from SuperNinja3.xxxx.com (172.28.200.165): icmp_seq=2 ttl=64 time=0.214 ms
64 bytes from SuperNinja3.xxxx.com (172.28.200.165): icmp_seq=3 ttl=64 time=0.181 ms
64 bytes from SuperNinja3.xxxx.com (172.28.200.165): icmp_seq=4 ttl=64 time=0.173 ms

--- SuperNinja3.xxxx.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2998ms
rtt min/avg/max/mdev = 0.113/0.170/0.214/0.037 ms
SuperNinja1:~ # ping -c 4 SuperNinja4
PING SuperNinja4.xxxx.com (172.28.200.150) 56(84) bytes of data.
64 bytes from SuperNinja4.xxxx.com (172.28.200.150): icmp_seq=1 ttl=64 time=3.18 ms
64 bytes from SuperNinja4.xxxx.com (172.28.200.150): icmp_seq=2 ttl=64 time=0.169 ms
64 bytes from SuperNinja4.xxxx.com (172.28.200.150): icmp_seq=3 ttl=64 time=0.202 ms
64 bytes from SuperNinja4.xxxx.com (172.28.200.150): icmp_seq=4 ttl=64 time=0.147 ms

--- SuperNinja4.xxxx.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.147/0.925/3.185/1.305 ms
SuperNinja1:~ # ping -c 4 SuperNinja5
PING SuperNinja5.xxxx.com (172.28.200.153) 56(84) bytes of data.
64 bytes from SuperNinja5.xxxx.com (172.28.200.153): icmp_seq=1 ttl=64 time=5.96 ms
64 bytes from SuperNinja5.xxxx.com (172.28.200.153): icmp_seq=2 ttl=64 time=0.224 ms
64 bytes from SuperNinja5.xxxx.com (172.28.200.153): icmp_seq=3 ttl=64 time=0.152 ms
64 bytes from SuperNinja5.xxxx.com (172.28.200.153): icmp_seq=4 ttl=64 time=0.150 ms

--- SuperNinja5.xxxx.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2999ms
rtt min/avg/max/mdev = 0.150/1.623/5.966/2.507 ms
SuperNinja1:~ #


Add all your hosts to the /etc/hosts file on all the nodes
SuperNinja1:~ # cat /etc/hosts
#
# hosts         This file describes a number of hostname-to-address
#               mappings for the TCP/IP subsystem.  It is mostly
#               used at boot time, when no name servers are running.
#               On small systems, this file can be used instead of a
#               "named" name server.
# Syntax:
#    
# IP-Address  Full-Qualified-Hostname  Short-Hostname
#


# special IPv6 addresses
::1             localhost ipv6-localhost ipv6-loopback

fe00::0         ipv6-localnet

ff00::0         ipv6-mcastprefix
ff02::1         ipv6-allnodes
ff02::2         ipv6-allrouters
ff02::3         ipv6-allhosts
172.28.200.161 SuperNinja1.xxxx.com SuperNinja1
127.0.0.1 localhost.localdomain localhost
172.28.200.163 SuperNinja2.xxxx.com SuperNinja2
172.28.200.165 SuperNinja3.xxxx.com SuperNinja3
172.28.200.150 SuperNinja4.xxxx.com SuperNinja4
172.28.200.153 SuperNinja5.xxxx.com SuperNinja5
SuperNinja1:~ #
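
One way to keep the file identical everywhere is to edit it once and push it out; a minimal sketch, run as root from SuperNinja1:
for h in SuperNinja2 SuperNinja3 SuperNinja4 SuperNinja5; do
    scp /etc/hosts root@$h:/etc/hosts
done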

HDFS works better if there is no LVM underneath, so what I did was fdisk a raw partition per disk and mount it as /hdfsfilesystemX. In the case of SuperNinja1, I also created an LV named /data; this is where the RAW files to be processed will be loaded. See my blog post on how to create PVs, VGs and LVs: http://kingratlinux.blogspot.com/2014/06/create-physical-volumes-pv-volume.html

The 1st thing is to see how many disks are in the machine
SuperNinja1:~ # hwinfo --disk --short
disk:                                                           
  /dev/sdd             SMC2108
  /dev/sda             SMC2108
  /dev/sdc             SMC2108
  /dev/sdb             SMC2108
SuperNinja1:~ #

Use fdisk to partition the disks. On the 1st disk, /dev/sda1 and /dev/sda2 are used for the OS; I then created /dev/sda3 for the /data slice
SuperNinja:~ #fdisk -l /dev/sda

Disk /dev/sda: 999.0 GB, 998999326720 bytes
255 heads, 63 sectors/track, 121454 cylinders, total 1951170560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00079fbc

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1060863      529408   83  Linux
/dev/sda2         1060864   147861503    73400320   8e  Linux LVM
/dev/sda3       147861504  1951170559   901654528   8e  Linux LVM
SuperNinja:~ #

For /dev/sdb, /dev/sdc and /dev/sdd, no LVs are created, just a raw slice that's mounted. Note the type is set to Linux (83), not Linux LVM (8e) as for /dev/sda3
SuperNinja1:~ # fdisk -l /dev/sdb

Disk /dev/sdb: 999.0 GB, 998999326720 bytes
192 heads, 17 sectors/track, 597785 cylinders, total 1951170560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x25303156

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb3            2048  1951170559   975584256   83  Linux
SuperNinja1:~ # fdisk -l /dev/sdc

Disk /dev/sdc: 999.0 GB, 998999326720 bytes
192 heads, 17 sectors/track, 597785 cylinders, total 1951170560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x43492116

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc3            2048  1951170559   975584256   83  Linux
SuperNinja1:~ # fdisk -l /dev/sdd

Disk /dev/sdd: 999.0 GB, 998999326720 bytes
192 heads, 17 sectors/track, 597785 cylinders, total 1951170560 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x94721d15

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd3            2048  1951170559   975584256   83  Linux
SuperNinja1:~ # df -h
Filesystem                  Size  Used Avail Use% Mounted on
/dev/mapper/system-root     2.0G  368M  1.6G  20% /
udev                         32G  200K   32G   1% /dev
tmpfs                        32G     0   32G   0% /dev/shm
/dev/sda1                   509M  132M  352M  28% /boot
/dev/mapper/system-home    1008M   40M  918M   5% /home
/dev/mapper/system-opt      9.9G  1.5G  8.0G  16% /opt
/dev/mapper/system-srv      6.9G  4.7G  1.9G  72% /srv
/dev/mapper/system-tmp      5.0G  144M  4.6G   3% /tmp
/dev/mapper/system-usr      6.0G  4.0G  1.7G  71% /usr
/dev/mapper/system-var      4.0G  377M  3.4G  10% /var
/dev/sdb3                   916G  152G  718G  18% /hdfsfilesystem1
/dev/sdc3                   916G  150G  720G  18% /hdfsfilesystem2
/dev/sdd3                   916G  152G  718G  18% /hdfsfilesystem3
/dev/mapper/datadisk-part0  493G  227G  241G  49% /data
SuperNinja1:~ #
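
The df output above only shows the end result; formatting and mounting the raw slices would look roughly like this (my sketch, using ext3 as elsewhere in this series):
mkfs.ext3 /dev/sdb3
mkfs.ext3 /dev/sdc3
mkfs.ext3 /dev/sdd3
mkdir -p /hdfsfilesystem1 /hdfsfilesystem2 /hdfsfilesystem3
# Add matching /etc/fstab entries, e.g. /dev/sdb3 /hdfsfilesystem1 ext3 defaults 1 2, then:
mount /hdfsfilesystem1
mount /hdfsfilesystem2
mount /hdfsfilesystem3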

Disable IPV6 if you are not using it
Add the following lines to /etc/sysctl.conf
# disable ipv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
The file should look like the listing below; reboot the server after the changes.

SuperNinja1:/home # cat /etc/sysctl.conf
# Disable response to broadcasts.
# You don't want yourself becoming a Smurf amplifier.
net.ipv4.icmp_echo_ignore_broadcasts = 1
# enable route verification on all interfaces
net.ipv4.conf.all.rp_filter = 1
# enable ipV6 forwarding
#net.ipv6.conf.all.forwarding = 1
# increase the number of possible inotify(7) watches
fs.inotify.max_user_watches = 65536
# avoid deleting secondary IPs on deleting the primary IP
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
# disable ipv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
SuperNinja1:/home # cat /proc/sys/net/ipv6/conf/all/disable_ipv6
0
SuperNinja1:/home # reboot
Broadcast message from root (pts/0) (Wed May 28 10:27:23 2014):
The system is going down for reboot NOW!
SuperNinja1:/home #
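
As an alternative to the reboot above, the sysctl settings can usually be applied in place (I rebooted to be thorough):
sysctl -p /etc/sysctl.conf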

Once the machine is up, check that IPV6 has been disabled; the value should now be 1

Xshell:\> ssh root@172.28.200.161
Connecting to 172.28.200.161:22...
Connection established.
Escape character is '^@]'.
WARNING! The remote SSH server rejected X11 forwarding request.
Last login: Wed May 28 09:01:25 2014 from kingrat
SuperNinja1:~ # cat /proc/sys/net/ipv6/conf/all/disable_ipv6
1
SuperNinja1:~ #