This article explores some basic Hadoop commands that help in day-to-day activities.
Hadoop file system shell commands are organized similarly to Unix/Linux commands, so anyone who has worked with a Unix shell can pick them up easily. These commands communicate with HDFS and with the other file systems that Hadoop supports.
1) List the contents of a directory
2) Create or delete a directory
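The commands for these two operations can be sketched as follows; the paths are placeholders, and a running HDFS is assumed:

```shell
# 1) List the contents of an HDFS directory
hadoop fs -ls /user/cloudera

# 2) Create a directory
hadoop fs -mkdir /user/cloudera/empdir

# Delete a directory (-r removes it recursively, including its contents)
hadoop fs -rm -r /user/cloudera/empdir
```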
Note: HDFS is read-only while in safe mode, so write operations such as creating or deleting files will fail. To check the status of safemode (on newer Hadoop versions, use hdfs dfsadmin in place of hadoop dfsadmin):
hadoop dfsadmin -safemode get
To change the safemode to ON
hadoop dfsadmin -safemode enter
To change the safemode to OFF, i.e., to leave safemode
hadoop dfsadmin -safemode leave
3) Copy a file from the local file system to HDFS
hadoop fs -put <sourcefilepath> <destinationfilepath>
Examples:
hadoop fs -put Desktop/Documents/emp.txt /user/cloudera/empdir
hadoop fs -copyFromLocal Desktop/Documents/emp.txt /user/cloudera/emp.txt
To know more about "copyFromLocal", "put", "copyToLocal", and "get", see the Hadoop FileSystem shell documentation.
4) Read the file
hadoop fs -cat /user/cloudera/emp.txt
The above command reads the file. However, avoid using it on large files, since streaming an entire file to the console can hurt I/O performance; it is best suited to files with small data.
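One way to peek at the start of a large file without streaming all of it to the console (a sketch, assuming a running HDFS and the example path used above) is to pipe into head, which closes the stream early:

```shell
# Print only the first 10 lines; the pipe ends the read early, limiting I/O
hadoop fs -cat /user/cloudera/emp.txt | head -n 10
```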
5) Copy the file from HDFS to the local system
This is the reverse scenario of put and copyFromLocal; the corresponding commands are "get" and "copyToLocal".
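A sketch of both commands, reusing the example paths from above (a running HDFS is assumed):

```shell
# Copy a file from HDFS to the local file system
hadoop fs -get /user/cloudera/emp.txt Desktop/Documents/emp.txt

# Equivalent, but the destination is restricted to the local file system
hadoop fs -copyToLocal /user/cloudera/emp.txt Desktop/Documents/emp.txt
```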
6) Move the file from one HDFS location to another (HDFS location)
hadoop fs -mv emp.txt testDir
hadoop fs -mv testDir testDir2
hadoop fs -mv testDir2/testDir /user/cloudera
7) Admin Commands
To view the config settings, open the HDFS configuration file, typically found at /etc/hadoop/conf/hdfs-site.xml.
To change default configuration values such as dfs.replication or dfs.blocksize in hdfs-site.xml, edit the file with root privileges, for example:
sudo vi /etc/hadoop/conf/hdfs-site.xml
Press "i" to enter insert mode, modify the values as per your requirement, then save and exit with :wq!
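For illustration, the relevant properties inside hdfs-site.xml look like this; the values shown here are examples, not recommendations:

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.blocksize</name>
    <value>134217728</value> <!-- 128 MB, in bytes -->
  </property>
</configuration>
```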
hadoop fs -tail [-f] <file>
The Hadoop fs shell tail command shows the last 1 KB of a file on the console (stdout); with -f it keeps following the file as new data is appended.
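A short usage sketch with the example file from earlier sections (a running HDFS is assumed):

```shell
# Show the last 1 KB of the file
hadoop fs -tail /user/cloudera/emp.txt

# Keep following the file as new data is appended (Ctrl+C to stop)
hadoop fs -tail -f /user/cloudera/emp.txt
```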