passwd: Authentication token manipulation error

Error:
I was trying to log in to one of our test machines; it asked me to change my expired password, then showed me this error and kicked me out:

WARNING: Your password has expired.
You must change your password now and login again!
Changing password for user abc.
Changing password for abc
(current) UNIX password: 
New UNIX password: 
Retype new UNIX password: 
passwd: Authentication token manipulation error

Troubleshooting:
I googled the error and found that all the answers centered around remounting the "/" filesystem in read-write mode:
# mount -o remount,rw /

I was sure that was not my case, but at least I knew it was somehow related to the "/" filesystem. Later I managed to log in to the server using another account and found the "/" filesystem was 100% full; this was preventing users from updating the /etc/shadow file when changing their passwords.

Solution:
After freeing up some space under the "/" filesystem I managed to change the password and log in with that user successfully.
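The quickest way to confirm this situation is to check the usage of the root filesystem and hunt for the biggest space consumers. A quick sketch (the du paths below are just common suspects, not specific to my case; adjust them for your system):

```shell
# Show usage of the root filesystem; Use% at 100% reproduces the passwd error
df -h /
# List common space hogs on "/", largest first (adjust the paths as needed)
du -xsh /var/log /var/cache /tmp 2>/dev/null | sort -rh
```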

Configuring JUMBO Frames

What are Jumbo Frames:

Jumbo frames are Ethernet frames bigger than the default maximum of 1500 bytes.
The common maximum size for a jumbo frame (MTU) is 9000 bytes.
Jumbo frames are generally available on gigabit (or faster) networks only.

Why Jumbo Frames:


Jumbo frames can increase network throughput and reduce the CPU cycles spent on packet processing.
Jumbo frames are recommended on networks where the majority of the traffic consists of large files/packets; they are also recommended on NAS networks and on the interconnect network between Oracle RAC servers.
Every network device in the path (NICs, switches, routers) must support jumbo frames, and jumbo frames must be configured on all of them. Although the feature is available in most network devices, it's not supported by some NICs and switches, so perform a test before you go to production.
Before you set up jumbo frames, keep in mind that manufacturers didn't pick the default 1500-byte frame arbitrarily, so you should have a strong justification before altering it.

How to configure Jumbo Frames:

Suppose that eth1 is the NIC connected to your NAS switch and you want to enable jumbo frames on it:
  # vi /etc/sysconfig/network-scripts/ifcfg-eth1
  #Add the following parameter (default value is 1500):
  MTU=9000

Restart the NIC:
 # ifdown eth1; ifup eth1
 # ifconfig -a eth1  
=> You should see MTU set to 9000 in the output.
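The same check can be done with the newer ip(8) tool. As a sketch, the snippet below queries the loopback interface lo only so it runs anywhere; substitute eth1 (or your actual NIC name) on a real system:

```shell
# Read the current MTU with ip(8); replace lo with eth1 on your system
IFACE=lo
ip link show "$IFACE" | grep -o 'mtu [0-9]*'
# To change the MTU on the fly without editing the ifcfg file (not persistent):
#   ip link set dev eth1 mtu 9000
```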

Testing JUMBO Frames:

There are two ways to test whether your jumbo frames are working or not:
Note: Jumbo frames must be enabled on the NAS switch before doing this test, or the test will fail.

1) Using traceroute command:
  # traceroute -F nas-storage 9000
    traceroute to nas-storage (192.168.110.2), 30 hops max, 9000 byte packets
    1  nas-storage (192.168.110.2)  0.269 ms  0.238 ms  0.226 ms

   =>This test was OK
   => In case you get the message "Message too long", reduce the MTU until that message stops appearing.

2) Using the ping command: [with MTU=9000, test with 8972 bytes, not more]
  # ping -c 2 -M do -s 8972 nas-storage
    8980 bytes from nas-storage (192.168.110.2): icmp_seq=0 ttl=64 time=0.245 ms  => This test was OK.
   => In case you get the message "Frag needed and DF set (mtu = 9000)", reduce the -s size until you get a proper ping reply.
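The 8972 figure isn't arbitrary: the ICMP payload must leave room for the 20-byte IP header and the 8-byte ICMP header inside the 9000-byte MTU. A quick sanity check of the arithmetic:

```shell
# ICMP payload size = MTU - IP header (20 bytes) - ICMP header (8 bytes)
mtu=9000
payload=$((mtu - 20 - 8))
echo "$payload"    # prints 8972, the value passed to ping -s above
```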


Shredding Files | Disk In Linux

Sometimes it is a requirement to shred and delete a file containing sensitive data, to minimize the chance of restoring the file from the hard disk after its deletion.


# shred -fuzv -n 30  /home/oracle/aa.txt


-f ........ change permissions to allow writing if necessary.
-u ........ remove the file after shredding.
-z ........ add a final overwrite with zeros to hide the shredding.
-v ........ verbose mode.
-n 30 .... overwrite file aa.txt 30 times (the default is 3 passes in recent coreutils; very old versions defaulted to 25).

Note: Shredding will be less effective on journaling filesystems (ReiserFS, ext3/ext4), on RAID systems, and on compressed filesystems. For ext3/ext4, check /etc/fstab for the "data=" option: if "data=journal" is set (which journals file data in addition to metadata), shredding effectiveness is low; if "data=ordered" (the default) or "data=writeback" is set, shredding should work fine.
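To see which journaling mode is actually in effect, check the live mount options for "/" (if no data= option appears, ext3/ext4 is running with its default, data=ordered):

```shell
# Show the mount options currently in effect for the root filesystem;
# look for data=journal / data=ordered / data=writeback in the options field
grep ' / ' /proc/mounts
```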

Last but not least, shredding your data file by file in this way is not guaranteed;
the guaranteed way, in my opinion, is to shred the whole disk, or at least the whole partition.

e.g. Shredding /dev/sda3 partition
# shred -fzv -n 30  /dev/sda3

In case you want to shred the whole disk, replace /dev/sda3 with /dev/sda, and so on.

As an extra layer of wiping the hard disk/partition, you can use the dd command:
dd helps wipe your disk by overwriting it entirely with zeros, and it usually performs faster than the shred command:

# dd  if=/dev/zero  of=/dev/sda  bs=1048576

In case you want to wipe a partition or a file, replace /dev/sda with the partition you want to wipe (e.g. /dev/sda3) or with the file you want to wipe (e.g. /home/oracle/aa.txt).
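A sketch of the idea on a scratch file (the /tmp path is just for the demo): the same cmp check applies to a device like /dev/sda after wiping, though reading back a whole disk takes time.

```shell
# Create a 4 KiB scratch file with random data, wipe it with zeros, then verify
dd if=/dev/urandom of=/tmp/wipe_demo bs=1024 count=4 2>/dev/null
dd if=/dev/zero    of=/tmp/wipe_demo bs=1024 count=4 2>/dev/null
cmp -n 4096 /dev/zero /tmp/wipe_demo && echo "wiped: all zeros"
```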

Use the above commands with caution.


Files Encryption

There are many ways to encrypt files under Linux; in this post I'll cover the gpg utility, which makes it easy to encrypt a file.

Encrypt a file with password:

# gpg -c  aa.txt
Enter passphrase:
Repeat passphrase:


Note: a new encrypted file will be created with the name aa.txt.gpg.
As a next step, you should remove the original file aa.txt to keep only the encrypted version of the file on the system.

Any other user [even one with sufficient privileges on the file] will not be able to view the contents of the encrypted file; it will appear as meaningless characters, like this:
e.g.
# su - scott
# ls -l aa.txt.gpg
 -rw-r--r-- 1 scott   scott        164 Jul 29 17:46  aa.txt.gpg
# cat aa.txt.gpg
<8c>^M^D^C^C^B^HéÔt÷Ì^M<89>`É<93>U?<


To decrypt an encrypted file:
# gpg  aa.txt.gpg

gpg: CAST5 encrypted data
Enter passphrase:
gpg: CAST5 encrypted data
gpg: encrypted with 1 passphrase
gpg: WARNING: message was not integrity protected


Note: It will create a new file aa.txt and keep the encrypted file, which you can remove later.


You can watch this video to see a gpg tutorial:
http://www.youtube.com/watch?v=T0duUXxnVpg#t=58



Zipping | Compressing Files in Linux

TAR Command


The tar command is commonly used to archive directories and files into a single archive file.

Tar a directory:
# tar cvfp  <archive file name>  <directory to be archived>
e.g.
# tar cvfp  /backupdisk/logs.tar  /home/oracle/Logs

c  --> Create a new archive
v --> verbose mode
f  --> use archive file
p --> preserve permissions
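A runnable sketch of the same command using a throwaway directory (the /tmp paths are just for the demo; -C is used so tar doesn't warn about absolute paths):

```shell
# Build a scratch directory, archive it, and confirm the file is inside
mkdir -p /tmp/tar_demo
echo "hello" > /tmp/tar_demo/a.txt
tar cvfp /tmp/logs_demo.tar -C /tmp tar_demo
tar tvf /tmp/logs_demo.tar | grep a.txt
```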

Tar & compress:
# tar cvfpZ  /backupdisk/logs.tar.Z  /home/oracle/Logs
Z --> compress the tar file using the compress program [lowest compression ratio]

# tar cvfpz  /backupdisk/logs.tar.gz  /home/oracle/Logs
z --> compress the tar file using the gzip compressor [good compression ratio in a reasonable time]

# tar cvfpj  /backupdisk/logs.tar.bz2  /home/oracle/Logs
j --> compress the tar file using the bzip2 compressor [best compression ratio, but the longest time]
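To see the trade-off on your own data, you can create the gzip and bzip2 variants of the same tree and compare sizes (bzip2 must be installed for the j flag; the /tmp names are just for the demo):

```shell
# Archive the same directory with gzip and bzip2, then compare file sizes
mkdir -p /tmp/cmp_demo && head -c 100000 /dev/zero > /tmp/cmp_demo/data
tar cfpz /tmp/cmp_demo.tar.gz  -C /tmp cmp_demo
tar cfpj /tmp/cmp_demo.tar.bz2 -C /tmp cmp_demo
ls -l /tmp/cmp_demo.tar.gz /tmp/cmp_demo.tar.bz2
```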

Extract the tar file on the same tar file location:
# cd /backupdisk/
# tar xvfp  logs.tar
x --> extract
v --> verbose mode
f --> use archive file
p --> preserve permissions

Extract tar file under different location than the tar file location:
Change to the directory where you want to extract the tar file:
# cd /u01
Extract the tar file, providing its full path:
# tar xvpf  /backupdisk/ORACLE_HOME/ora11g.tar

Extract specific file from the tar file:
e.g. Extract only file log1.txt from inside logs.tar:
# tar  xvf  logs.tar   log1.txt
# tar zxvf  logs.tar.gz log1.txt   
    --> In case the tar file is compressed.

Extract only the files with the .log extension from logs.tar:
# tar -xvf  logs.tar    --wildcards --no-anchored '*.log'
# tar -zxvf logs.tar.gz --wildcards --no-anchored '*.log'


View contents of a Tar file without extracting the tar file:
# tar tvf  logs.tar | less          --> show the contents of an archive.
# tar tvf  logs.tar | grep "\.txt"  --> show only the files with the .txt extension.
# tar ztvf logs.tar.gz              --> show the contents of a compressed tar file.


GZIP Command

The gzip command is commonly used to compress files; gzip replaces the original files with the compressed ones at the end of the compression operation. It also allows controlling the compression level, from 1 [the lowest | the fastest] to 9 [the highest | the slowest].
The compressed files get a .gz extension.

Note: In general, a higher compression level consumes more CPU resources and takes longer to finish, while a lower compression level consumes less CPU and finishes faster.

Examples:
# gzip myfile
# gzip -1 *.arc  --> compress all files with the .arc extension at the lowest compression level (fastest).
# gzip -9 *.log  --> compress all files with the .log extension at the highest compression level (slowest).

De-Compress  a .gz file:
# gunzip -f  myfile.gz
-f  --> overwrite an existing file named myfile if it already exists.
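A quick round trip on a scratch file shows the replace-on-compress behavior (the /tmp name is just for the demo):

```shell
# gzip removes the original and leaves only the .gz; gunzip reverses it
echo "sample line" > /tmp/gz_demo.txt
gzip -9 /tmp/gz_demo.txt            # now only /tmp/gz_demo.txt.gz exists
gunzip -f /tmp/gz_demo.txt.gz       # restores /tmp/gz_demo.txt
cat /tmp/gz_demo.txt
```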


BZIP2 Command

bzip2 is similar to the gzip command: it compresses and replaces the original file.
bzip2 provides 9 compression levels just like gzip, and its compression ratios are higher than gzip's, but gzip is much faster than bzip2.

Compress and replace the file with the highest compression ratio:
# bzip2 -9  log1.log

Compress and keep the original file with the highest compression ratio:
# bzip2 -c  -9   log1.log  >  log1.log.bz2

De-Compress a bzip2 file:
# bzip2 -d  log1.log.bz2
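bzip2 also has a -k flag that keeps the original without the shell redirection shown above. A sketch on a scratch file (the /tmp name is just for the demo):

```shell
# -k keeps the input file, so both versions exist afterwards
echo "log line" > /tmp/bz_demo.log
bzip2 -f -k -9 /tmp/bz_demo.log     # -f overwrites an old .bz2 if present
ls -l /tmp/bz_demo.log /tmp/bz_demo.log.bz2
```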


ZIP Command

Similar to the tar command, zip creates a zip file from one or more files or from a directory, and keeps the original files untouched.
It provides 9 levels of compression similar to gzip command.

Compress multiple files in one file:
# zip -9  all_logs.zip   log1.log  log2.log  log3.log 
 -9 is the highest compression level.

Compress directory:
# zip -r   backup.zip   backup

Unzip a file:
# unzip backup.zip

Zip & Encrypt:

Zip a group of files with the .aud extension into a compressed file and encrypt it with a password:
# zip -e  audit.zip  *.aud
Enter password:
Verify password:


Unzip an encrypted compressed file:
# unzip -P <password> audit.zip

Note: Some programs available on the internet can easily reveal the password of files protected by the zip command; it's recommended to use the gpg utility to encrypt the compressed file instead.




Searching Files in Linux

Find command

The following are the most commonly used search templates for the find command:

Search for files only (not directories) whose names start with the "orapw" pattern:
# find . -name "orapw*" -type f
find /  : searches under the / path.
find .  : searches under the current working directory.

Search with Ignoring case sensitivity:
# find . -iname "file*"
./file4
./FILE1


Search while ignoring "permission denied" errors from directories not owned by you:
# find /u01/oracle -iname alert_orcl.log  -print 2>/dev/null
-iname         ignores case sensitivity.
2>/dev/null    discards the error messages.

Find the biggest 10 files under the current directory:
# find . -ls | sort -nrk7 | head -10

Search for files bigger than 10m under the current working directory:
# find . -size +10M -exec ls -lh {} \; | awk '{print $5 , $3 , $8 , $9}'

Search for files accessed within the last two days:
# find . -atime -2

Search for files modified within the last two days:
# find . -mtime -2
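The sign in front of the day count matters and often trips people up. A self-contained check (using a file created just now, so -mtime -2 must match it):

```shell
# -mtime -2 = modified less than 2 days ago
# -mtime  2 = modified exactly 2 days ago (rounded to 24-hour periods)
# -mtime +2 = modified more than 2 days ago
touch /tmp/find_mtime_demo
find /tmp -maxdepth 1 -name find_mtime_demo -mtime -2
```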


LOCATE command

"locate" is much faster than "find" because it searches the database not the Filesystem.
"locate" command is using database file: /var/lib/mlocate/mlocate.db
"locate" command database config file is: /etc/updatedb.conf

In order to keep the "locate" command effective, you have to update its database on a regular basis using this command:
# updatedb

The elapsed time for this command depends on whether the database was updated recently or is very old.
Note: In case you want to exclude some directories from being scanned and added to the database, you can add the following parameter to the config file /etc/updatedb.conf:
e.g. to exclude the directories /tmp, /var/spool and /media:
PRUNEPATHS="/tmp /var/spool /media"

How to use LOCATE command:
# locate -i OC_DATA.dbf
-i     makes the search case-insensitive and matches partial words without needing to provide "*".


Searching specific words inside files

Print file names along with the lines that contain the keyword "ORA-": [searches one level deep]
# grep "ORA-" /home/oracle/*

Print lines containing the "ORA-" keyword inside files under /home/oracle, along with line numbers:
# grep -n "ORA-" /home/oracle/*

Print the names of files that contain the keyword "dump": [searches one level deep]
# grep "dump" /home/oracle/* | cut -d: -f1 

Search for multiple words (release, Finish) inside file names starting with rmanlog:
# grep 'release\|Finish' /backupdisk/rmanbkps/rmanlog*

Search for lines beginning with the "oracle" keyword inside the file db.log:

# grep '^oracle'  db.log

Search for lines ending with the string ";" inside the file db.log:
# grep ';$'  db.log

Search and replace a keyword inside files:
e.g. replace the word "sea" with the word "ocean" in all files with the .txt extension:
# sed -i 's/sea/ocean/g' *.txt

Delete lines containing a specific pattern:
e.g. delete all lines that contain the pattern "DBA_BUNDLE1" from the file .bash_profile:
# sed -i '/DBA_BUNDLE1/d' .bash_profile
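Both sed recipes can be verified on a scratch file (the /tmp name and contents are just for the demo):

```shell
# Replace "sea" with "ocean", then delete the line containing "drop me"
printf 'the sea is big\ndrop me\n' > /tmp/sed_demo.txt
sed -i 's/sea/ocean/g' /tmp/sed_demo.txt
sed -i '/drop me/d'    /tmp/sed_demo.txt
cat /tmp/sed_demo.txt               # prints: the ocean is big
```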