Tuesday, March 29, 2016

ipset: iptables extensions

ipset is a Linux kernel framework for managing sets of IP addresses, networks, and ports; it has been available since kernel 2.4.


Installation
To install it on Debian Jessie:
# apt-get install ipset

Creating rules
To create a new set with type hash:net (the hash:net set type uses a hash to store different sized IP network addresses; a network address with a zero prefix size cannot be stored in this type of set):
# ipset create mynetrules hash:net
Or, to create a set with type hash:ip (the hash:ip set type uses a hash to store IP host addresses (the default) or network addresses; a zero-valued IP address cannot be stored in a hash:ip set):
# ipset create myiprules hash:ip
Or, to create a set with type hash:ip,port (the hash:ip,port set type uses a hash to store IP address and port number pairs; the port number is interpreted together with a protocol (TCP by default), and a zero protocol number cannot be used):

# ipset create myipportrules hash:ip,port
"If you want to store same size subnets from a given network (say /24 blocks from a /8 network), use the bitmap:ip set type. If you want to store random same size networks (say random /24 blocks), use the hash:ip set type. If you have got random size of netblocks, use hash:net."

Viewing ipset rules
To list the defined sets and their entries:
# ipset list

Adding IP into ipset rules
To add a single IP to a set:
# ipset add myiprules 183.3.202.105
To add an IP block (CIDR range) to a set:
# ipset add myiprules 193.201.227.0/24
If a host name or service name containing a dash is used instead of an IP address or service number, it must be enclosed in square brackets. Example:
# ipset add myiprules [u18576666.onlinehome-server.com]

Removing IP from ipset rules
To remove an IP address or hostname from a set:

# ipset del myiprules [u18576666.onlinehome-server.com]
or
# ipset del myiprules 183.3.202.105
or
# ipset del myiprules 193.201.227.0/24

Adding ipset rules into iptables
To match an ipset set in iptables and drop packets whose source is in the set:
# iptables -I INPUT -m set --match-set mynetrules src -j DROP
or, to block only connections to SSH (TCP port 22):
# iptables -I INPUT  -p tcp --dport 22 -m set --match-set mynetrules src -j DROP
Note: 
-I inserts the rule at the top of the chain
-A appends the rule at the end of the chain
To delete a rule, replace -I or -A with -D
Restoring or refreshing ipset rules
To restore sets from a previously saved file, even when the set names already exist:
# ipset -exist restore < /root/ipset.up.rules
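To make sets survive a reboot, they can be saved with ipset save and restored from a boot hook. A sketch, assuming Debian-style networking; the hook path and rules file below are assumptions, adjust them to your setup:

```shell
#!/bin/sh
# Sketch of a restore hook, e.g. /etc/network/if-pre-up.d/ipset
# (path and rules file are assumptions). Save the current sets
# beforehand with:
#   ipset save > /root/ipset.up.rules
# -exist makes restore tolerate sets that already exist.
ipset -exist restore < /root/ipset.up.rules
```

The hook must be executable and runs before the interface (and any iptables rules referencing the sets) comes up.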
Additional commands
To list the iptables rules:
# iptables -L
To list the iptables rules with line numbers:
# iptables -L --line-numbers
To list only the INPUT chain:
# iptables -L INPUT
To list the INPUT chain with line numbers:
# iptables -L INPUT --line-numbers
To delete rule number 2 from the INPUT chain:
# iptables -D INPUT 2
# iptables -D INPUT 2
 
Help
# ipset --help
ipset v6.23

Usage: ipset [options] COMMAND

Commands:
create SETNAME TYPENAME [type-specific-options]
        Create a new set
add SETNAME ENTRY
        Add entry to the named set
del SETNAME ENTRY
        Delete entry from the named set
test SETNAME ENTRY
        Test entry in the named set
destroy [SETNAME]
        Destroy a named set or all sets
list [SETNAME]
        List the entries of a named set or all sets
save [SETNAME]
        Save the named set or all sets to stdout
restore
        Restore a saved state
flush [SETNAME]
        Flush a named set or all sets
rename FROM-SETNAME TO-SETNAME
        Rename two sets
swap FROM-SETNAME TO-SETNAME
        Swap the content of two existing sets
help [TYPENAME]
        Print help, and settype specific help
version
        Print version information
quit
        Quit interactive mode

Options:
-o plain|save|xml
       Specify output mode for listing sets.
       Default value for "list" command is mode "plain"
       and for "save" command is mode "save".
-s
        Print elements sorted (if supported by the set type).
-q
        Suppress any notice or warning message.
-r
        Try to resolve IP addresses in the output (slow!)
-!
        Ignore errors when creating or adding sets or
        elements that do exist or when deleting elements
        that don't exist.
-n
        When listing, just list setnames from the kernel.

-t
        When listing, list setnames and set headers
        from kernel only.
-f
        Read from the given file instead of standard
        input (restore) or write to given file instead
        of standard output (list/save).

Supported set types:
    list:set            3       skbinfo support
    list:set            2       comment support
    list:set            1       counters support
    list:set            0       Initial revision
    hash:mac            0       Initial revision
    hash:net,iface      6       skbinfo support
    hash:net,iface      5       forceadd support
    hash:net,iface      4       comment support
    hash:net,iface      3       counters support
    hash:net,iface      2       /0 network support
    hash:net,iface      1       nomatch flag support
    hash:net,iface      0       Initial revision
    hash:net,port       7       skbinfo support
    hash:net,port       6       forceadd support
    hash:net,port       5       comment support
    hash:net,port       4       counters support
    hash:net,port       3       nomatch flag support
    hash:net,port       2       Add/del range support
    hash:net,port       1       SCTP and UDPLITE support
    hash:net,port,net   2       skbinfo support
    hash:net,port,net   1       forceadd support
    hash:net,port,net   0       initial revision
    hash:net,net        2       skbinfo support
    hash:net,net        1       forceadd support
    hash:net,net        0       initial revision
    hash:net            6       skbinfo support
    hash:net            5       forceadd support
    hash:net            4       comment support
    hash:net            3       counters support
    hash:net            2       nomatch flag support
    hash:net            1       Add/del range support
    hash:net            0       Initial revision
    hash:ip,port,net    7       skbinfo support
    hash:ip,port,net    6       forceadd support
    hash:ip,port,net    5       comment support
    hash:ip,port,net    4       counters support
    hash:ip,port,net    3       nomatch flag support
    hash:ip,port,net    2       Add/del range support
    hash:ip,port,net    1       SCTP and UDPLITE support
    hash:ip,port,ip     5       skbinfo support
    hash:ip,port,ip     4       forceadd support
    hash:ip,port,ip     3       comment support
    hash:ip,port,ip     2       counters support
    hash:ip,port,ip     1       SCTP and UDPLITE support
    hash:ip,mark        2       skbinfo support
    hash:ip,mark        1       forceadd support
    hash:ip,mark        0       initial revision
    hash:ip,port        5       skbinfo support
    hash:ip,port        4       forceadd support
    hash:ip,port        3       comment support
    hash:ip,port        2       counters support
    hash:ip,port        1       SCTP and UDPLITE support
    hash:ip             4       skbinfo support
    hash:ip             3       forceadd support
    hash:ip             2       comment support
    hash:ip             1       counters support
    hash:ip             0       Initial revision
    bitmap:port         3       skbinfo support
    bitmap:port         2       comment support
    bitmap:port         1       counters support
    bitmap:port         0       Initial revision
    bitmap:ip,mac       3       skbinfo support
    bitmap:ip,mac       2       comment support
    bitmap:ip,mac       1       counters support
    bitmap:ip,mac       0       Initial revision
    bitmap:ip           3       skbinfo support
    bitmap:ip           2       comment support
    bitmap:ip           1       counters support
    bitmap:ip           0       Initial revision
 
Tuesday, March 22, 2016

GNU grep

grep prints lines that contain a match for a pattern. The general synopsis of the grep command line is:
grep [options] pattern [input_file_names]
There can be zero or more options. pattern is treated as the pattern (and not as an input file name) only if it was not already specified within options (using the ‘-e pattern’ or ‘-f file’ options). There can be zero or more input file names.

Matching Control

-e pattern
--regexp=pattern
Use pattern as the pattern. This can be used to specify multiple search patterns, or to protect a pattern beginning with a ‘-’. (-e is specified by POSIX.)
-f file
--file=file
Obtain patterns from file, one per line. The empty file contains zero patterns, and therefore matches nothing. (-f is specified by POSIX.)
-i
-y
--ignore-case
Ignore case distinctions, so that characters that differ only in case match each other. Although this is straightforward when letters differ in case only via lowercase-uppercase pairs, the behavior is unspecified in other situations. For example, uppercase “S” has an unusual lowercase counterpart “ſ” (Unicode character U+017F, LATIN SMALL LETTER LONG S) in many locales, and it is unspecified whether this unusual character matches “S” or “s” even though uppercasing it yields “S”. Another example: the lowercase German letter “ß” (U+00DF, LATIN SMALL LETTER SHARP S) is normally capitalized as the two-character string “SS” but it does not match “SS”, and it might not match the uppercase letter “ẞ” (U+1E9E, LATIN CAPITAL LETTER SHARP S) even though lowercasing the latter yields the former.
-y is an obsolete synonym that is provided for compatibility. (-i is specified by POSIX.)
-v
--invert-match
Invert the sense of matching, to select non-matching lines. (-v is specified by POSIX.)
-w
--word-regexp
Select only those lines containing matches that form whole words. The test is that the matching substring must either be at the beginning of the line, or preceded by a non-word constituent character. Similarly, it must be either at the end of the line or followed by a non-word constituent character. Word-constituent characters are letters, digits, and the underscore. This option has no effect if -x is also specified.
-x
--line-regexp
Select only those matches that exactly match the whole line. For a regular expression pattern, this is like parenthesizing the pattern and then surrounding it with ‘^’ and ‘$’. (-x is specified by POSIX.) 
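The difference between -w and -x is easiest to see on a small sample (the input lines below are invented):

```shell
# -w: the match must form a whole word; -x: the match must cover the whole line.
printf 'root\nrootkit\nnot root here\n' | grep -w 'root'
# matches "root" and "not root here", but not "rootkit"
printf 'root\nrootkit\nnot root here\n' | grep -x 'root'
# matches only the line that is exactly "root"
```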

Examples: 

To find authentication log entries for “root” on a Debian system:
# grep "root" /var/log/auth.log
With context lines (-B 3 lines before, -A 2 lines after each match) we can see, for example, that when someone failed to log in as an admin they also failed the reverse mapping, which means they might not have a valid domain name:
# grep -B 3 -A 2 'Invalid user' /var/log/auth.log
To find authentication logs at current system date
# grep  "$(date +%b) $(date +%e)" /var/log/auth.log | grep 'fail\|preauth'
To find authentication logs at current system hour 
# grep  "$(date +%b) $(date +%e) $(date +%H:)" /var/log/auth.log
To find mail logs at current system date
# grep  "$(date +%b) $(date +%e)" /var/log/mail.info
To find mail logs from one hour before the current system time
# grep  "$(date --date="1 hours ago" +%b) $(date --date="1 hours ago" +%e)" /var/log/mail.info
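The failed-login search above can be extended to count failures per source IP. A sketch on a made-up sample (the file path and log lines below are invented for illustration; on a real system use /var/log/auth.log):

```shell
# Build a small fake auth.log sample.
cat > /tmp/auth.sample <<'EOF'
Mar 29 10:01:02 host sshd[111]: Failed password for root from 1.2.3.4 port 4321 ssh2
Mar 29 10:01:05 host sshd[112]: Failed password for invalid user admin from 1.2.3.4 port 4322 ssh2
Mar 29 10:02:07 host sshd[113]: Failed password for root from 5.6.7.8 port 4000 ssh2
Mar 29 10:03:09 host sshd[114]: Accepted password for deploy from 9.9.9.9 port 5000 ssh2
EOF

# Count failed logins per source IP, most active first.
grep 'Failed password' /tmp/auth.sample \
  | grep -oE 'from [0-9.]+' \
  | awk '{ print $2 }' \
  | sort | uniq -c | sort -rn
```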

A list of date command field descriptors, copied from http://www.cyberciti.biz/faq/unix-linux-bash-get-time/:

       %%     a literal %
 
       %a     locale's abbreviated weekday name (e.g., Sun)
 
       %A     locale's full weekday name (e.g., Sunday)
 
       %b     locale's abbreviated month name (e.g., Jan)
 
       %B     locale's full month name (e.g., January)
 
       %c     locale's date and time (e.g., Thu Mar  3 23:05:25 2005)
 
       %C     century; like %Y, except omit last two digits (e.g., 20)
 
       %d     day of month (e.g., 01)
 
       %D     date; same as %m/%d/%y
 
       %e     day of month, space padded; same as %_d
 
       %F     full date; same as %Y-%m-%d
 
       %g     last two digits of year of ISO week number (see %G)
 
       %G     year of ISO week number (see %V); normally useful only with %V
 
       %h     same as %b
 
       %H     hour (00..23)
 
       %I     hour (01..12)
 
       %j     day of year (001..366)
 
       %k     hour, space padded ( 0..23); same as %_H
 
       %l     hour, space padded ( 1..12); same as %_I
 
       %m     month (01..12)
 
       %M     minute (00..59)
 
       %n     a newline
 
       %N     nanoseconds (000000000..999999999)
       %p     locale's equivalent of either AM or PM; blank if not known
 
       %P     like %p, but lower case
 
       %r     locale's 12-hour clock time (e.g., 11:11:04 PM)
 
       %R     24-hour hour and minute; same as %H:%M
 
       %s     seconds since 1970-01-01 00:00:00 UTC
 
       %S     second (00..60)
 
       %t     a tab
 
       %T     time; same as %H:%M:%S
 
       %u     day of week (1..7); 1 is Monday
 
       %U     week number of year, with Sunday as first day of week (00..53)
 
       %V     ISO week number, with Monday as first day of week (01..53)
 
       %w     day of week (0..6); 0 is Sunday
 
       %W     week number of year, with Monday as first day of week (00..53)
 
       %x     locale's date representation (e.g., 12/31/99)
 
       %X     locale's time representation (e.g., 23:13:48)
 
       %y     last two digits of year (00..99)
 
       %Y     year
 
       %z     +hhmm numeric time zone (e.g., -0400)
 
       %:z    +hh:mm numeric time zone (e.g., -04:00)
 
       %::z   +hh:mm:ss numeric time zone (e.g., -04:00:00)
 
       %:::z  numeric time zone with :  to  necessary  precision  (e.g.,  -04,
              +05:30)
 
       %Z     alphabetic time zone abbreviation (e.g., EDT)
       By  default,  date  pads  numeric  fields  with  zeroes.  The following
       optional flags may follow '%':
 
       -      (hyphen) do not pad the field
 
       _      (underscore) pad with spaces
 
       0      (zero) pad with zeros
 
       ^      use upper case if possible
 
       #      use opposite case if possible
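These flags are easy to verify against a fixed date (2016-03-05 here, chosen arbitrarily):

```shell
# Padding and case flags applied to date format specifiers.
date -d 2016-03-05 '+%d'             # zero padded day: 05
date -d 2016-03-05 '+%-d'            # unpadded day: 5
date -d 2016-03-05 '+%_d'            # space padded day
LC_ALL=C date -d 2016-03-05 '+%^b'   # upper-cased month abbreviation: MAR
```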


Core util: cat, head, tail, sort, uniq and cut

cat
cat copies each file (‘-’ means standard input), or standard input if none are given, to standard output.
Synopsis:
cat [option]… [file]…

head
head prints the first part (10 lines by default) of each file; it reads from standard input if no files are given or when given a file of ‘-’.
Synopsis:
head [option]… [file]…
# head /var/log/auth.log

tail
tail prints the last part (10 lines by default) of each file; it reads from standard input if no files are given or when given a file of ‘-’.
Synopsis:
tail [option]… [file]…
# tail /var/log/auth.log

sort
sort sorts, merges, or compares all the lines from the given files, or standard input if none are given or for a file of ‘-’. By default, sort writes the results to standard output.
Synopsis:
sort [option]… [file]…
options:
‘-n’
‘--numeric-sort’
‘--sort=numeric’
Sort numerically. The number begins each line and consists of optional blanks, an optional ‘-’ sign, and zero or more digits possibly separated by thousands separators, optionally followed by a decimal-point character and zero or more digits. An empty number is treated as ‘0’. The LC_NUMERIC locale specifies the decimal-point character and thousands separator. By default a blank is a space or a tab, but the LC_CTYPE locale can change this.
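The difference between the default lexicographic order and -n is easy to check on a few invented numbers:

```shell
# Lexicographic order compares character by character, so "10" sorts
# before "2"; -n compares numeric values instead.
printf '10\n9\n2\n' | sort     # lexicographic: 10, 2, 9
printf '10\n9\n2\n' | sort -n  # numeric: 2, 9, 10
```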

uniq
uniq writes the unique lines in the given input, or standard input if nothing is given or for an input name of ‘-’.
Synopsis:
uniq [option]… [input [output]]
options:
‘-c’
‘--count’
Print the number of times each line occurred along with the line.

cut
cut writes to standard output selected parts of each line of each input file, or standard input if no files are given or for a file name of ‘-’.
Synopsis:
cut option… [file]…
Options:
‘-d input_delim_byte’
‘--delimiter=input_delim_byte’
With -f, use the first byte of input_delim_byte as the input field separator (default is TAB).
‘-f field-list’
‘--fields=field-list’
Select for printing only the fields listed in field-list. Fields are separated by a TAB character by default. Also print any line that contains no delimiter character, unless the --only-delimited (-s) option is specified.
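For example, extracting the login and home directory fields from a passwd-style line (the sample line below is invented):

```shell
# -d sets the field delimiter, -f picks fields (here: 1st and 6th).
printf 'alice:x:1000:1000:Alice:/home/alice:/bin/bash\n' \
  | cut -d ':' -f 1,6
# output: alice:/home/alice
```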
Note awk supports more sophisticated field processing, and by default will use (and discard) runs of blank characters to separate fields, and ignore leading and trailing blanks.
awk '{print $2}'    # print the second field
awk '{print $(NF-1)}' # print the penultimate field
awk '{print $2,$1}' # reorder the first two fields
In the unlikely event that awk is unavailable, one can use the join command to process blank characters as awk does above.
join -a1 -o 1.2     - /dev/null # print the second field
join -a1 -o 1.2,1.1 - /dev/null # reorder the first two fields
Example: a quick way to see which IP addresses are most active is to sort by them:
# cat access.log |cut -d ' ' -f 1 |sort

UPDATE: even easier: the uniq command has a -c argument that does most of this work automatically. It counts the occurrences of each unique line. Then a quick sort -n and a tail shows the big ones. Also, I tend to use "cut" as above, but one of the Dreamhost guys reminded me that awk may be a little more straightforward:
# cat /path/to/access.log |awk '{print $1}' |sort  |uniq -c |sort -n |tail
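The same counting pipeline can be tried against a tiny invented access log (the sample lines and file path below are made up; real Apache logs also put the client IP in the first field):

```shell
# Build a small fake access log.
cat > /tmp/access.sample <<'EOF'
1.2.3.4 - - [29/Mar/2016:10:00:00 +0000] "GET / HTTP/1.1" 200 100
1.2.3.4 - - [29/Mar/2016:10:00:01 +0000] "GET /a HTTP/1.1" 200 100
5.6.7.8 - - [29/Mar/2016:10:00:02 +0000] "GET / HTTP/1.1" 200 100
EOF

# Count hits per IP; the most active IP ends up on the last line.
awk '{ print $1 }' /tmp/access.sample | sort | uniq -c | sort -n | tail
```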

References:

  • https://www.gnu.org/software/coreutils/manual/html_node/index.html
  • https://encodable.com/tech/blog/2008/12/17/Count_IP_Addresses_in_Access_Log_File_BASH_OneLiner

Monday, March 21, 2016

Virtualmin creating Sub-Server for Sub-Domain

These are steps to create Sub-Server for Sub-Domain:
  1. Virtualmin -> Create Virtual Server
  2. Fill in the domain name of the sub-domain, for example subdom.domain.com (domain.com is the root domain; replace it with your own root domain) 
  3. In Enabled features check these items:
    1. Setup DNS zone?
    2. Setup website for domain?
    3. Setup SSL website too?
    4. Optionally choose one of Setup Webalizer for web logs? or Enable AWstats reporting?
    5. Set other options depending on your requirements

Friday, March 18, 2016

Linux awk

Awk can handle most tasks that are essentially text processing.
An awk program follows the form:
pattern { action }
awk is line oriented. That is, the pattern specifies a test that is performed with each line read as input. If the condition is true, then the action is taken. The default pattern is something that matches every line. This is the blank or null pattern.
A trivial awk program:
BEGIN { print "START" }
      { print         }
END   { print "STOP"  }
Example:
BEGIN { print "File\tOwner" }
      { print $8, "\t", $3 }
END   { print " - DONE -" }
Example awk_example1.awk:
#!/bin/awk -f
BEGIN { print "File\tOwner" }
      { print $8, "\t", $3 }
END   { print " - DONE -" }
In its simplest usage awk is meant for processing column-oriented text data, such as tables, presented to it on standard input. The variables $1, $2, and so forth are the contents of the first, second, etc. column of the current input line. For example, to print the second column of a file, you might use the following simple awk script:
awk < file '{ print $2 }'
This means "on every line, print the second field". 
By default awk splits input lines into fields based on whitespace, that is, spaces and tabs. You can change this by using the -F option to awk and supplying another character. For instance, to print the home directories of all users on the system, you might do
awk < /etc/passwd -F: '{ print $6 }'
since the password file has fields delimited by colons and the home directory is the 6th field. 
Awk is a weakly typed language; variables can be either strings or numbers, depending on how they're referenced. All numbers are floating-point. So to implement the fahrenheit-to-celsius calculator, you might write
awk '{ print ($1-32)*(5/9) }'
which will convert fahrenheit temperatures provided on standard input to celsius until it gets an end-of-file.  
echo 5 4 | awk '{ print $1 + $2 }'
prints 9, while
echo 5 4 | awk '{ print $1 $2 }'
prints 54. Note that
echo 5 4 | awk '{ print $1, $2 }'
prints "5 4".
awk has some built-in variables that are automatically set; $1 and so on are examples of these. The other builtin variables that are useful for beginners are generally NF, which holds the number of fields in the current input line ($NF gives the last field), and $0, which holds the entire current input line. 
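A quick illustration of NF, $NF, and $0 (the input line is invented):

```shell
echo 'alpha beta gamma' | awk '{ print NF }'    # number of fields: 3
echo 'alpha beta gamma' | awk '{ print $NF }'   # last field: gamma
echo 'alpha beta gamma' | awk '{ print $0 }'    # the whole input line
```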
You can make your own variables, with whatever names you like (except for reserved words in the awk language) just by using them. You do not have to declare variables. Variables that haven't been explicitly set to anything have the value "" as strings and 0 as numbers.
For example, the first command below prints the average of all the numbers on each line, while the second prints the average of the first field over the whole input:
awk '{ tot=0; for (i=1; i<=NF; i++) tot += $i; print tot/NF; }'
awk '{ tot += $1; n += 1; }  END { print tot/n; }'
Note the use of two different block statements. The second command has a block with END in front of it; this means the block runs once after all input has been processed. 
You can also supply regular expressions to match the whole line against:
awk ' /^test/ { print $2 }'
The block conditions BEGIN and END are special and are run before processing any input, and after processing all input, respectively. 
awk supports loop and conditional statements like in C, that is, for, while, do/while, if, and if/else.
awk '{ for (i=2; i<=NF; i++) printf "%s ", $i; printf "\n"; }'
Note the use of NF to iterate over all the fields and the use of printf to place newlines explicitly. 
Finding everything within the last 2 hours:
awk -vDate=`date -d'now-2 hours' +[%d/%b/%Y:%H:%M:%S` '$4 > Date {print Date, $0}' access_log
Note: the date is stored in field 4
To find entries between 2 and 4 hours ago:
awk -vDate=`date -d'now-4 hours' +[%d/%b/%Y:%H:%M:%S` -vDate2=`date -d'now-2 hours' +[%d/%b/%Y:%H:%M:%S` '$4 > Date && $4 < Date2 {print Date, Date2, $4}' access_log
The following will show you the IPs of every user who requests the index page sorted by the number of hits:
awk -F'[ "]+' '$7 == "/" { ipcount[$1]++ }    END { for (i in ipcount) {        printf "%15s - %d\n", i, ipcount[i] } }' logfile.log
$7 is the requested URL. You can add whatever conditions you want at the beginning. Replace the $7 == "/" condition with whatever information you want.
If you replace the $1 in (ipcount[$1]++), then you can group the results by other criteria. Using $7 would show what pages were accessed and how often. Of course then you would want to change the condition at the beginning. The following would show what pages were accessed by a user from a specific IP:
awk -F'[ "]+' '$1 == "1.2.3.4" { pagecount[$7]++ }    END { for (i in pagecount) {        printf "%15s - %d\n", i, pagecount[i] } }' logfile.log
You can also pipe the output through sort to get the results in order, either as part of the shell command, or also in the awk script itself:
awk -F'[ "]+' '$7 == "/" { ipcount[$1]++ }    END { for (i in ipcount) { printf "%15s - %d\n", i, ipcount[i] | "sort" } }' logfile.log
Example how to remove duplicate lines in text file
awk '!x[$0]++' [text_file_name]
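To see why this works: x[$0]++ counts how often each whole line has been seen, and !x[$0] is true only the first time, so only first occurrences are printed. A quick demonstration on invented input:

```shell
# Duplicate lines are dropped while the original order is preserved.
printf 'b\na\nb\nc\na\n' | awk '!x[$0]++'
# prints: b, a, c (each on its own line)
```

Unlike sort | uniq, this keeps the input order and needs only one pass.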
There are only a few commands in AWK. The list and syntax follows:
  • if ( conditional ) statement [ else statement ]
  • while ( conditional ) statement
  • for ( expression ; conditional ; expression ) statement
  • for ( variable in array ) statement
  • break
  • continue
  • { [ statement ] ... }
  • variable = expression
  • print [ expression-list ] [ > expression ]
  • printf format [ , expression-list ] [ > expression ]
  • next
  • exit
Example:
#!/bin/awk -f
BEGIN {
    # Print the squares from 1 to 10 the first way
    i = 1;
    while (i <= 10) {
        print "The square of", i, "is", i*i;
        i = i + 1;
    }
    # do it again, using more concise code
    for (i = 1; i <= 10; i++) {
        print "The square of", i, "is", i*i;
    }
    # now end
    exit;
}
Abbreviations:
  • NF : number of fields in the current record
  • NR : number of the current record
  • FS : field separator, e.g. FS=":"
  • RS : record separator, e.g. RS="\n"
  • ORS : output record separator, e.g. ORS="\r\n"
  • FILENAME : name of the current input file
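For example, FS and ORS together let awk read colon-separated input and emit CRLF-terminated output (the sample lines below are invented):

```shell
# FS splits input fields on ':'; ORS ends each output record with CRLF
# instead of the default LF.
printf 'root:x:0\nuser:x:1000\n' \
  | awk 'BEGIN { FS=":"; ORS="\r\n" } { print $1 }'
```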
References:  
  • http://www.hcs.harvard.edu/~dholland/computers/awk.html
  • http://stackoverflow.com/questions/7706095/find-entries-in-log-file-within-timespan-eg-the-last-hour
  • http://www.grymoire.com/Unix/Awk.html
  • http://serverfault.com/questions/11028/do-you-have-any-useful-awk-and-grep-scripts-for-parsing-apache-logs 
  • http://stackoverflow.com/questions/11532157/unix-removing-duplicate-lines-without-sorting