Thursday, September 25, 2014

Hadoop Installation

Running Hadoop on Ubuntu Linux (Single-Node Cluster)

  In this tutorial I will describe the required steps for setting up a pseudo-distributed, single-node Hadoop cluster backed by the Hadoop Distributed File System, running on Ubuntu Linux.

Are you looking for the multi-node cluster tutorial? Just head over there.
Hadoop is a framework written in Java for running applications on large clusters of commodity hardware and incorporates features similar to those of the Google File System (GFS) and of the MapReduce computing paradigm. Hadoop’s HDFS is a highly fault-tolerant distributed file system and, like Hadoop in general, designed to be deployed on low-cost hardware. It provides high throughput access to application data and is suitable for applications that have large data sets.
The main goal of this tutorial is to get a simple Hadoop installation up and running so that you can play around with the software and learn more about it.
This tutorial has been tested with the software versions used in the commands below (Ubuntu Linux with OpenJDK 7 and Hadoop 1.0.3).

Prerequisites

 Java 7 or above (OpenJDK 7 is installed in the commands below):

$ sudo apt-get update  
$ sudo apt-get install openjdk-7-jdk
$ java -version
 

Adding a dedicated Hadoop system user

$ sudo addgroup hadoop
$ sudo adduser --ingroup hadoop hduser

Configuring SSH

 Hadoop requires SSH access to manage its nodes, i.e. remote machines plus your local machine if you want to use Hadoop on it (which is what we want to do in this short tutorial). For our single-node setup of Hadoop, we therefore need to configure SSH access to localhost for the hduser user we created in the previous section.
I assume that you have SSH up and running on your machine and configured it to allow SSH public key authentication.
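If SSH is not installed yet, it can typically be installed on Ubuntu with:

$ sudo apt-get install openssh-server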

First, we have to generate an SSH key for the hduser user.

user@ubuntu:~$ su - hduser
hduser@ubuntu:~$ ssh-keygen -t rsa -P ""
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hduser/.ssh/id_rsa):
Created directory '/home/hduser/.ssh'.
Your identification has been saved in /home/hduser/.ssh/id_rsa.
Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.
The key fingerprint is:
9b:82:ea:58:b4:e0:35:d7:ff:19:66:a6:ef:ae:0e:d2 hduser@ubuntu
The key's randomart image is:
[...snipp...]
hduser@ubuntu:~$
 
The second line will create an RSA key pair with an empty password. Generally, using an empty password is not recommended, but in this case it is needed to unlock the key without your interaction (you don’t want to enter the passphrase every time Hadoop interacts with its nodes).
Second, you have to enable SSH access to your local machine with this newly created key.


hduser@ubuntu:~$ cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
The final step is to test the SSH setup by connecting to your local machine with the hduser user. The step is also needed to save your local machine’s host key fingerprint to the hduser user’s known_hosts file. If you have any special SSH configuration for your local machine like a non-standard SSH port, you can define host-specific SSH options in $HOME/.ssh/config (see man ssh_config for more information).

hduser@ubuntu:~$ ssh localhost
The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is d7:87:25:47:ae:02:00:eb:1d:75:4f:bb:44:f9:36:26.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
Linux ubuntu 2.6.32-22-generic #33-Ubuntu SMP Wed Apr 28 13:27:30 UTC 2010 i686 GNU/Linux
Ubuntu 10.04 LTS
[...snipp...]
hduser@ubuntu:~$
 
If the SSH connection fails, these general tips might help:
  • Enable debugging with ssh -vvv localhost and investigate the error in detail.
  • Check the SSH server configuration in /etc/ssh/sshd_config, in particular the options PubkeyAuthentication (which should be set to yes) and AllowUsers (if this option is active, add the hduser user to it). If you made any changes to the SSH server configuration file, you can force a configuration reload with sudo /etc/init.d/ssh reload.
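If your local machine uses a non-standard SSH port (a hypothetical example: port 2222), a matching entry in $HOME/.ssh/config could look like this:

Host localhost
    Port 2222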

Hadoop

Installation

Download Hadoop from the Apache Download Mirrors and extract the contents of the Hadoop package to a location of your choice. I picked /usr/local/hadoop. Make sure to change the owner of all the files to the hduser user and hadoop group, for example:
 
$ cd /usr/local
$ sudo tar xzf hadoop-1.0.3.tar.gz
$ sudo mv hadoop-1.0.3 hadoop
$ sudo chown -R hduser:hadoop hadoop
 

Update $HOME/.bashrc

Add the following lines to the end of the $HOME/.bashrc file of user hduser. If you use a shell other than bash, you should of course update its appropriate configuration files instead of .bashrc.


# Set Hadoop-related environment variables
export HADOOP_HOME=/usr/local/hadoop

# Set JAVA_HOME (we will also configure JAVA_HOME directly for Hadoop later on)
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64  # use java-7-openjdk-i386 on 32-bit systems

# Some convenient aliases and functions for running Hadoop-related commands
unalias fs &> /dev/null
alias fs="hadoop fs"
unalias hls &> /dev/null
alias hls="fs -ls"

# If you have LZO compression enabled in your Hadoop cluster and
# compress job outputs with LZOP (not covered in this tutorial):
# Conveniently inspect an LZOP compressed file from the command
# line; run via:
#
# $ lzohead /hdfs/path/to/lzop/compressed/file.lzo
#
# Requires installed 'lzop' command.
#
lzohead () {
    hadoop fs -cat $1 | lzop -dc | head -1000 | less
}

# Add Hadoop bin/ directory to PATH
export PATH=$PATH:$HADOOP_HOME/bin
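
After saving the file, you can apply the changes to your current shell session without logging out again:

$ source $HOME/.bashrc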
 

Configuration

Our goal in this tutorial is a single-node setup of Hadoop. More information about what we do in this section is available on the Hadoop Wiki.

hadoop-env.sh

The only required environment variable we have to configure for Hadoop in this tutorial is JAVA_HOME. Open conf/hadoop-env.sh in the editor of your choice (if you used the installation path in this tutorial, the full path is /usr/local/hadoop/conf/hadoop-env.sh) and set the JAVA_HOME environment variable to the directory of the JDK installed earlier (OpenJDK 7 in this tutorial).
Change

conf/hadoop-env.sh

# The java implementation to use.  Required.
# export JAVA_HOME=/usr/lib/j2sdk1.5-sun
to


conf/hadoop-env.sh

# The java implementation to use.  Required.
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64  # use java-7-openjdk-i386 on 32-bit systems
 

conf/*-site.xml

In this section, we will configure the directory where Hadoop will store its data files, the network ports it listens to, etc. Our setup will use Hadoop’s Distributed File System, HDFS, even though our little “cluster” only contains our single local machine.
You can leave the settings below “as is” with the exception of the hadoop.tmp.dir parameter, which you must change to a directory of your choice. We will use the directory /app/hadoop/tmp in this tutorial. Hadoop’s default configurations use hadoop.tmp.dir as the base temporary directory both for the local file system and HDFS, so don’t be surprised if you see Hadoop creating the specified directory automatically on HDFS at some later point.
Now we create the directory and set the required ownerships and permissions:




$ sudo mkdir -p /app/hadoop/tmp
$ sudo chown hduser:hadoop /app/hadoop/tmp
# ...and if you want to tighten up security, chmod from 755 to 750...
$ sudo chmod 750 /app/hadoop/tmp
If you forget to set the required ownerships and permissions, you will see a java.io.IOException when you try to format the NameNode in the next section.
Add the following snippets between the <configuration> ... </configuration> tags in the respective configuration XML file.
In file conf/core-site.xml:
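
A typical single-node configuration, using the /app/hadoop/tmp directory chosen above and the localhost port 54310 that also appears in the netstat output further down, would be:

<property>
  <name>hadoop.tmp.dir</name>
  <value>/app/hadoop/tmp</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:54310</value>
  <description>The name of the default file system (URI of the NameNode).</description>
</property>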






In file conf/mapred-site.xml:
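
A typical single-node setting, matching the localhost port 54311 that the JobTracker listens on in the netstat output below, would be:

<property>
  <name>mapred.job.tracker</name>
  <value>localhost:54311</value>
  <description>The host and port that the MapReduce JobTracker runs at.</description>
</property>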




 


  
 
In file conf/hdfs-site.xml:
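
For a single-node cluster, a typical setting is a block replication factor of 1, since there is only one DataNode to store replicas on:

<property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.</description>
</property>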



   





Formatting the HDFS filesystem via the NameNode

The first step to starting up your Hadoop installation is formatting the Hadoop filesystem which is implemented on top of the local filesystem of your “cluster” (which includes only your local machine if you followed this tutorial). You need to do this the first time you set up a Hadoop cluster.
Do not format a running Hadoop filesystem as you will lose all the data currently in the cluster (in HDFS)!
To format the filesystem (which simply initializes the directory specified by the dfs.name.dir variable), run the command



hduser@ubuntu:~$ /usr/local/hadoop/bin/hadoop namenode -format
The output will look like this:


hduser@ubuntu:/usr/local/hadoop$ bin/hadoop namenode -format
10/05/08 16:59:56 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = ubuntu/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
10/05/08 16:59:56 INFO namenode.FSNamesystem: fsOwner=hduser,hadoop
10/05/08 16:59:56 INFO namenode.FSNamesystem: supergroup=supergroup
10/05/08 16:59:56 INFO namenode.FSNamesystem: isPermissionEnabled=true
10/05/08 16:59:56 INFO common.Storage: Image file of size 96 saved in 0 seconds.
10/05/08 16:59:57 INFO common.Storage: Storage directory .../hadoop-hduser/dfs/name has been successfully formatted.
10/05/08 16:59:57 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
************************************************************/
hduser@ubuntu:/usr/local/hadoop$

Starting your single-node cluster

Run the command:



hduser@ubuntu:~$ /usr/local/hadoop/bin/start-all.sh
This will start up a NameNode, DataNode, SecondaryNameNode, JobTracker, and TaskTracker on your machine.
The output will look like this:


hduser@ubuntu:/usr/local/hadoop$ bin/start-all.sh
starting namenode, logging to /usr/local/hadoop/bin/../logs/hadoop-hduser-namenode-ubuntu.out
localhost: starting datanode, logging to /usr/local/hadoop/bin/../logs/hadoop-hduser-datanode-ubuntu.out
localhost: starting secondarynamenode, logging to /usr/local/hadoop/bin/../logs/hadoop-hduser-secondarynamenode-ubuntu.out
starting jobtracker, logging to /usr/local/hadoop/bin/../logs/hadoop-hduser-jobtracker-ubuntu.out
localhost: starting tasktracker, logging to /usr/local/hadoop/bin/../logs/hadoop-hduser-tasktracker-ubuntu.out
hduser@ubuntu:/usr/local/hadoop$
A nifty tool for checking whether the expected Hadoop processes are running is jps (part of Sun’s Java since v1.5.0). See also How to debug MapReduce programs.


hduser@ubuntu:/usr/local/hadoop$ jps
2287 TaskTracker
2149 JobTracker
1938 DataNode
2085 SecondaryNameNode
2349 Jps
1788 NameNode
You can also check with netstat if Hadoop is listening on the configured ports.




hduser@ubuntu:~$ sudo netstat -plten | grep java
tcp   0  0 0.0.0.0:50070   0.0.0.0:*  LISTEN  1001  9236  2471/java
tcp   0  0 0.0.0.0:50010   0.0.0.0:*  LISTEN  1001  9998  2628/java
tcp   0  0 0.0.0.0:48159   0.0.0.0:*  LISTEN  1001  8496  2628/java
tcp   0  0 0.0.0.0:53121   0.0.0.0:*  LISTEN  1001  9228  2857/java
tcp   0  0 127.0.0.1:54310 0.0.0.0:*  LISTEN  1001  8143  2471/java
tcp   0  0 127.0.0.1:54311 0.0.0.0:*  LISTEN  1001  9230  2857/java
tcp   0  0 0.0.0.0:59305   0.0.0.0:*  LISTEN  1001  8141  2471/java
tcp   0  0 0.0.0.0:50060   0.0.0.0:*  LISTEN  1001  9857  3005/java
tcp   0  0 0.0.0.0:49900   0.0.0.0:*  LISTEN  1001  9037  2785/java
tcp   0  0 0.0.0.0:50030   0.0.0.0:*  LISTEN  1001  9773  2857/java
hduser@ubuntu:~$
If there are any errors, examine the log files in the logs/ directory (/usr/local/hadoop/logs/ if you followed this tutorial).
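For example, to follow the NameNode log file named in the startup output above:

hduser@ubuntu:~$ tail -f /usr/local/hadoop/logs/hadoop-hduser-namenode-ubuntu.out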

Stopping your single-node cluster

Run the command




hduser@ubuntu:~$ /usr/local/hadoop/bin/stop-all.sh
to stop all the daemons running on your machine.
 

Hadoop Web Interfaces

Hadoop comes with several web interfaces which are by default (see conf/hadoop-default.xml) available at these locations:
  • http://localhost:50070/ - web UI of the NameNode daemon
  • http://localhost:50030/ - web UI of the JobTracker daemon
  • http://localhost:50060/ - web UI of the TaskTracker daemon
These web interfaces provide concise information about what’s happening in your Hadoop cluster. You might want to give them a try.

NameNode Web Interface (HDFS layer)

The name node web UI shows you a cluster summary including information about total/remaining capacity, live and dead nodes. Additionally, it allows you to browse the HDFS namespace and view the contents of its files in the web browser. It also gives access to the local machine’s Hadoop log files.
By default, it’s available at http://localhost:50070/.


Tuesday, September 16, 2014

Ubuntu password reset

Power on the PC
At the GRUB boot menu ----------------------
First, you have to reboot into recovery mode.
If you have a single-boot (Ubuntu is the only operating system on your computer), to get the boot menu to show, you have to hold down the Shift key during bootup.
If you have a dual-boot (Ubuntu is installed next to Windows, another Linux operating system, or Mac OS X; and you choose at boot time which operating system to boot into), the boot menu should appear without the need to hold down the Shift key.


From the boot menu, select recovery mode, which is usually the second boot option.




After you select recovery mode and wait for all the boot-up processes to finish, you'll be presented with a few options. In this case, you want the Drop to root shell prompt option so press the Down arrow to get to that option, and then press Enter to select it.
The root account is the ultimate administrator and can do anything to the Ubuntu installation (including erase it), so please be careful with what commands you enter in the root terminal.
In recent versions of Ubuntu, the filesystem is mounted as read-only, so you need to enter the following command to remount it as read-write, which will allow you to make changes:



mount -o rw,remount /


If you have forgotten your username as well, type
ls /home
That's a lowercase L, by the way, not a capital i, in ls. You should then see a list of the users on your Ubuntu installation. In this case, I'm going to reset Susan Brownmiller's password. To reset the password, type
passwd username
where username is the username you want to reset. In this case, I want to reset Susan's password, so I type
passwd susan
You'll then be prompted for a new password. When you type the password you will get no visual response acknowledging your typing. Your password is still being accepted. Just type the password and hit Enter when you're done. You'll be prompted to retype the password. Do so and hit Enter again.
Now the password should be reset. Type
exit
to return to the recovery menu.

 
After you get back to the recovery menu, select resume normal boot, and use Ubuntu as you normally would—only this time, you actually know the password!


An alternative approach, if the one above does not work:
http://www.faqforge.com/linux/reset-root-password-ubuntu-linux-without-cd/

Sunday, July 20, 2014

Ubuntu 14.04 Flicker problems............

Ubuntu 14.04 Problems

Screen Flicker----------------

Working! Just install the latest 3.15 kernel (I tried 3.15 RC8 and 3.15.3 from http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.15.3-utopic/).
It works like a charm!
files to download 3.15.3 for 64-bit system:
  • linux-headers-3.15.3-031503-generic_3.15.3-031503.201407010040_amd64.deb
  • linux-headers-3.15.3-031503_3.15.3-031503.201407010040_all.deb
  • linux-image-3.15.3-031503-generic_3.15.3-031503.201407010040_amd64.deb
files to download 3.15.3 for 32-bit system:
  • linux-headers-3.15.3-031503-generic_3.15.3-031503.201407010040_i386.deb
  • linux-headers-3.15.3-031503_3.15.3-031503.201407010040_i386.deb
  • linux-image-3.15.3-031503-generic_3.15.3-031503.201407010040_i386.deb
Type in the terminal (from the directory containing the downloaded .deb files):
sudo dpkg -i *.deb

Thursday, April 17, 2014

Rack awareness and java in ubuntu

Best:
https://developer.yahoo.com/hadoop/tutorial/module2.html

http://ofirm.wordpress.com/2014/01/09/exploring-the-hadoop-network-topology/

 http://fendertech.blogspot.in/2013/07/hadoop-rack-awareness-104.html
------------------------------------------------------------------------------------------------------------
http://www.michael-noll.com/blog/2011/03/28/hadoop-space-quotas-hdfs-block-size-replication-and-small-files/

http://stackoverflow.com/questions/17799535/hdfs-reduced-replication-factor

http://stackoverflow.com/questions/20119320/when-i-store-files-in-hdfs-will-they-be-replicated

http://architects.dzone.com/articles/how-hdfs-does-replication

http://coenraets.org/blog/2012/10/real-time-web-analytics-with-node-js-and-socket-io/

http://bradhedlund.com/2011/09/10/understanding-hadoop-clusters-and-the-network

https://issues.apache.org/jira/browse/HADOOP-692

------------------------------------------------------------------------------------------------------------
autoreconf error in hadoop build--------------
apt-get -y install maven build-essential autoconf automake libtool cmake zlib1g-dev pkg-config libssl-dev
------------------------------------------------------------------------------------------------------------

    Download Java SE 7 JDK for Linux x86 archive. At the time of writing, the file I'm using is jdk-7u21-linux-i586.tar.gz, but the filename will change as updates are released.
    Apparently there is no longer a jvm folder, so create one.

    sudo mkdir /usr/lib/jvm

    Move the archive to the jvm folder

    sudo mv jdk-7u21-linux-i586.tar.gz /usr/lib/jvm/

    Change to the jvm folder and extract the JDK from the archive

    cd /usr/lib/jvm

    sudo tar zxvf jdk-7u21-linux-i586.tar.gz

    Everything will be extracted to a new jdk1.7.0_21 folder and you can delete
the archive file now.
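For example, since you are still in /usr/lib/jvm:

    sudo rm jdk-7u21-linux-i586.tar.gz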
 
sudo update-alternatives --install "/usr/bin/javac" "javac" "/usr/lib/jvm/jdk1.7.0_21/bin/javac" 1
sudo update-alternatives --config javac
 
Similarly for java:

sudo update-alternatives --install "/usr/bin/java" "java" "/usr/lib/jvm/jdk1.7.0_21/jre/bin/java" 1
sudo update-alternatives --config java

Double-check the version
    java -version

From: http://www.printandweb.ca/2013/04/manually-install-oracle-jdk-7-for.html


http://www.linuxquestions.org/questions/linux-newbie-8/lost-ubuntu-after-installing-opensuse-932336/

Sunday, April 6, 2014

ssh settings in hadoop...........(passwordless login)

on the host you will connect FROM:
generate the public/private key pair
> ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
copy the public key to every host you will connect TO:
> scp ~/.ssh/id_dsa.pub my_user_id@1.2.3.4:~/.ssh/id_dsa.pub
* this should prompt you for a password
shell into the remote machine
> ssh my_user_id@1.2.3.4
authorize the key by adding it to the list of authorized keys
> cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
log out of the current shell
> exit
test that you can log in with no password
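> ssh my_user_id@1.2.3.4
This time the login should succeed without prompting for a password.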

If problems persist, please refer to:
http://allthingshadoop.com/?s=Hadoop+Cluster+Setup%2C+SSH+Key+Authentication

http://ambari.apache.org/1.2.0/installing-hadoop-using-ambari/content/ambari-chap1-5-2.html

 

Wednesday, March 26, 2014

proxy settings in oracle solaris 11


For gateway settings:
  route add default 10.1.0.1
For proxy settings:
  1. export http_proxy=http://nihita-me-cs-2010:abc123@10.25.0.4:3128
  2. export https_proxy=http://nihita-me-cs-2010:abc123@10.25.0.4:3128
  3. svccfg -s system-repository:default setprop config/http_proxy = http://nihita-me-cs-2010:abc123@10.25.0.4:3128
  4. svccfg -s system-repository:default setprop config/https_proxy = http://nihita-me-cs-2010:abc123@10.25.0.4:3128
  5. svcadm refresh system-repository
  6. svcprop -p config/http_proxy system-repository
  7. svcprop -p config/https_proxy system-repository

http://www.thewireframecommunity.com/node/29

http://www.cyberciti.biz/faq/linux-unix-set-proxy-environment-variable/

http://gurkulindia.com/main/manpages/solaris-11-image-packaging-systems-quick-reference/#

http://docs.oracle.com/cd/E23824_01/html/821-1460/glqjr.html



  

Tuesday, March 25, 2014

MySQL Utilities

mysqldump is an effective tool to back up a MySQL database. It creates a *.sql file with DROP TABLE, CREATE TABLE and INSERT INTO SQL statements for the source database. To restore the database, execute the *.sql file on the destination database. For MyISAM tables, use the mysqlhotcopy method that we explained earlier, as it is faster.

backup: # mysqldump -u root -p[root_password] [database_name] > dumpfilename.sql

restore:# mysql -u root -p[root_password] [database_name] < dumpfilename.sql
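
For example, to back up and later restore a hypothetical database named testdb (the database must already exist before restoring into it):

# mysqldump -u root -p testdb > testdb.sql
# mysql -u root -p testdb < testdb.sql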
 
 
Random Data 

http://stackoverflow.com/questions/10788285/how-to-update-insert-random-dates-in-sql-within-a-specified-date-range



Thursday, March 6, 2014

grub2 installation

http://howtoubuntu.org/how-to-repair-restore-reinstall-grub-2-with-a-ubuntu-live-cd#.UxgyAoWukRs

steps:
sudo mount /dev/sdXY /mnt
Now bind the directories that grub needs access to in order to detect other operating systems, like so:
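A typical set of bind mounts for this step (these are the same directories that get unmounted again at the end of this post):

sudo mount --bind /dev /mnt/dev
sudo mount --bind /dev/pts /mnt/dev/pts
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys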
Now we jump into that using chroot.
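sudo chroot /mnt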

Now install, check, and update grub.
This time you only need to add the drive letter (usually a) to replace X, for example: grub-install /dev/sda, grub-install --recheck /dev/sda.
grub-install /dev/sdX
grub-install --recheck /dev/sdX
update-grub

Now grub is back, all that is left is to exit the chrooted system and unmount everything.
exit && sudo umount /mnt/dev && sudo umount /mnt/dev/pts && sudo umount /mnt/proc && sudo umount /mnt/sys && sudo umount /mnt

Wednesday, February 12, 2014

http://howtofindsolution.blogspot.in/2012/11/how-to-install-and-configure-latest.html

Thursday, January 30, 2014


run system calls through java

import java.io.*;

public class Hello {

    public static void main(String args[]) {
        try {
            // Run an external command ("ls -l") and wait for it to finish
            Runtime r = Runtime.getRuntime();
            Process p = r.exec("ls -l");
            p.waitFor();

            // Read the command's standard output line by line and print it
            BufferedReader b = new BufferedReader(new InputStreamReader(p.getInputStream()));
            String line;
            while ((line = b.readLine()) != null) {
                System.out.println(line);
            }
        } catch (Exception e1) {
            e1.printStackTrace();
        }

        System.out.println("finished.");
    }
}

Tuesday, January 28, 2014

hadoop help

http://b2ctran.wordpress.com/category/hadoop/
http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
http://b2ctran.wordpress.com/2013/08/26/install-hadoop-1-1-2-on-ubuntu-12-04/
http://technsolution.blogspot.in/2013/08/hadoop-installation-on-ubuntu-1204.html