Introduction to Bash

Introduction to Shells

A shell is a command-line interpreter: a program that takes user commands, interprets them, and passes them to the operating system kernel for execution. It acts as an interface between the user and the OS, allowing users to interact with the system using text-based commands.

What is a Shell’s Role?

The shell’s primary function is to:

  1. Interpret commands: Translate human-readable commands into system calls the kernel understands.

  2. Execute programs: Launch and manage processes based on the given commands.

  3. Provide a programming environment: Support scripting, allowing users to automate tasks and create complex workflows using shell scripts.
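
For example, a single line typed at the prompt exercises all three roles: the shell expands the *.log pattern (interpretation), launches gzip once per matching file (execution), and the for loop itself is a tiny program (scripting). This is only an illustrative sketch; it assumes a gzip that supports the -k (keep original) option, as GNU gzip does.

    # Compress every .log file in the current directory, keeping the originals (-k)
    for f in *.log; do
      gzip -k "$f"
    done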

Why use a Shell?

  • Automation: Enables scripting for automating repetitive tasks.

  • System Administration: Essential for managing servers, users, and system configurations.

  • Development: Integrates with development tools for compilation, testing, and deployment.

  • Flexibility and Control: Offers more granular control over the system compared to graphical interfaces.

  • Ubiquity: Shells are available across various operating systems.

Types of Shells

While Bash is prevalent, several shell types exist, each with its own features, syntax, and performance characteristics. Here’s a comparison of some common shells:

  1. Bash (Bourne-Again Shell): The most widely used shell, especially in Linux distributions. Known for its comprehensive features, scripting capabilities, and POSIX compliance.

  2. Sh (Bourne Shell): The original shell, developed by Stephen Bourne. More limited than Bash, but still found in older systems and often used as a simplified scripting shell.

  3. Ksh (Korn Shell): An improved version of the Bourne shell, offering features like command-line editing and job control. Influential in the development of POSIX standards.

  4. Zsh (Z Shell): Known for its customizability, plugins, and features like tab completion and spelling correction. A popular choice for power users and developers.

  5. Fish (Friendly Interactive Shell): Focuses on user-friendliness with features like auto-suggestions and syntax highlighting. Not POSIX-compliant, which can limit script portability.

  6. Csh (C Shell) and Tcsh: Shells with a C-like syntax. Less commonly used for scripting but can be preferred by users familiar with C programming.

Key Differences Between Shells

The shells listed above differ in many aspects:

  • Syntax: Shells may have varying syntax for loops (for, while), conditional statements (if, else), variable assignment, and command substitution. This can affect script portability (see the short comparison after this list).

  • Features: Bash, Zsh, and Fish offer more advanced features compared to Sh and Csh, such as command history, tab completion, and plugin support.

  • POSIX Compliance: Bash, Sh, and Ksh are largely POSIX-compliant, ensuring scripts are portable across different Unix-like systems. Fish is intentionally non-POSIX.

  • Interactivity: Zsh and Fish are designed for enhanced interactive use, offering features like auto-suggestions and better tab completion.

  • Performance: Different shells have varying performance characteristics. Generally, simpler shells like Sh execute scripts faster, while shells with more features (Bash, Zsh) might have a slight overhead.

  • Customization: Zsh and Fish are highly customizable, with extensive plugin ecosystems and configuration options.
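
To make the syntax and POSIX-compliance differences concrete, here is a small, illustrative snippet that runs under Bash but fails under a strictly POSIX shell such as dash, because [[ ]] pattern matching and arrays are Bash extensions:

    #!/bin/bash
    name="world"
    if [[ $name == w* ]]; then    # [[ ]] and glob matching are Bash extensions
      echo "Hello, $name"
    fi

    fruits=(apple banana cherry)  # Arrays are not part of POSIX sh
    echo "${fruits[1]}"           # Prints: banana

Scripts that must run everywhere should stick to the POSIX subset (and a #!/bin/sh shebang); scripts that can assume Bash are free to use such conveniences.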

Choosing a Shell

The choice of shell depends on individual needs and preferences.

  • For general scripting and portability, Bash remains a solid choice.

  • For enhanced interactivity and customization, Zsh or Fish are popular.

  • For scripting in environments where resources are constrained, Sh or Ksh may be suitable.

Understanding the differences between shell types enables you to select the most appropriate shell for a specific task and environment.
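
To see which shells are installed on a typical Linux system, and to check or change your own default shell, commands like the following can help (the available shells and paths will vary by system):

    cat /etc/shells    # List the shells that can be used as login shells
    echo $SHELL        # Show your current default login shell
    chsh -s /bin/zsh   # Change your login shell to Zsh (if it is installed)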

Common Shell Commands for Daily Usage

This section will introduce frequently used shell commands for navigating the file system, managing files, and performing basic operations. We’ll focus primarily on commands available in Bash and other POSIX-compliant shells.

Navigation

  1. pwd (Print Working Directory): Displays the absolute path of the current directory.

    pwd
    

    Example Output:

    /home/user/documents
    
  2. cd (Change Directory): Changes the current working directory.

    • cd: Changes to the user’s home directory.

    • cd <directory>: Changes to the specified directory.

    • cd ..: Moves one directory up (to the parent directory).

    • cd -: Returns to the previous directory.

    cd /var/log
    pwd  # Verify the change
    cd ..
    pwd #Back to /var
    cd -
    pwd #Back to /var/log
    

    Example Output:

    /var/log
    /var
    /var/log
    
  3. ls (List): Lists the files and directories in the current or specified directory.

    • ls: Lists files and directories in the current directory.

    • ls -l: Lists files and directories with detailed information (permissions, size, modification date).

    • ls -a: Lists all files and directories, including hidden ones (starting with .).

    • ls -t: Sorts the list by modification time (most recent first).

    • ls -R: Recursively lists all subdirectories.

    • ls -lh: Lists files and directories with detailed information and human-readable file sizes.

    • ls <directory>: Lists the files and directories in the specified directory.

    ls
    ls -l
    ls -a
    ls -lh /var/log
    

    Example Output:

    file1.txt  file2.txt  directory1
    total 4
    -rw-r--r-- 1 user user 1024 Oct 26 10:00 file1.txt
    -rw-r--r-- 1 user user 2048 Oct 26 10:05 file2.txt
    drwxr-xr-x 2 user user 4096 Oct 26 09:50 directory1
    .  ..  file1.txt  file2.txt  directory1
    total 1.8M
    -rw-r----- 1 syslog adm 1.2M Oct 27 03:53 auth.log
    -rw-r----- 1 syslog adm  78K Oct 26 00:00 auth.log.1
    -rw-r----- 1 root   adm  33K Oct 26 00:00 boot.log 
    drwxr-xr-x 2 root   root 4.0K Oct 26 09:50 directory1
    

File and Directory Management

  1. mkdir (Make Directory): Creates a new directory.

    mkdir my_new_directory
    ls  # Verify the creation
    

    Example Output:

    file1.txt  file2.txt  directory1 my_new_directory
    
  2. rmdir (Remove Directory): Removes an empty directory.

    rmdir my_new_directory
    ls  # Check that the directory was removed.
    

    Example Output:

    file1.txt  file2.txt  directory1
    
  3. touch: Creates empty files or updates timestamps on existing files.

    • Creating empty files: Can create multiple empty files at once.

    • Updating timestamps: If the file already exists, touch updates its access and modification times.

      touch new_file.txt
      touch file1.txt     # Access and modification times are updated.
      ls -l new_file.txt  # Check the new timestamp.
      
  4. rm (Remove): Deletes files and directories.

    • rm <file>: Deletes a file.
    • rm -r <directory>: Deletes a directory and its contents recursively (use with CAUTION).
    • rm -f <file>: Forces deletion without prompting (use with CAUTION).
    • rm -rf <directory>: Forces recursive deletion without prompting (EXTREME CAUTION!).
    rm new_file.txt
    rm -r directory1
    ls  # Check that the directory and file were removed.
    

    Example Output:

    file1.txt  file2.txt
    
  5. cp (Copy): Copies files and directories.

    • cp <source> <destination>: Copies a file to a new location or name.

    • cp -r <source_directory> <destination_directory>: Copies a directory recursively.

      cp file1.txt file3.txt      # Copy file1.txt as file3.txt
      cp -r directory1 directory2 # Copy directory1 and its contents recursively to directory2
      ls                          # Check that the new directory and file exist
      

      Example Output:

      file1.txt  file2.txt directory1 directory2 file3.txt
      
  6. mv (Move): Moves or renames files and directories.

    • mv <source> <destination>: Moves or renames a file or directory.

      mv file1.txt moved_file.txt #Rename file1.txt to moved_file.txt
      mv moved_file.txt directory2 #Move the file into directory2
      ls #Check files and directories
      

      Example Output:

      directory1 directory2 file2.txt file3.txt
      ls directory2
      moved_file.txt
      

File Content and Viewing

  1. cat (Concatenate): Displays the content of a file.

    cat file2.txt
    

    Example Output: (Assuming file2.txt contains “Hello, world!”)

    Hello, world!
    
  2. less: Views the content of a file one page at a time. Useful for large files. Press q to exit.

    less /var/log/syslog
    
  3. head: Displays the first few lines of a file (default is 10 lines).

    • head <file>
    • head -n <number> <file> (specifies the number of lines)
    head -n 5 /var/log/syslog
    
  4. tail: Displays the last few lines of a file (default is 10 lines). Often used for monitoring log files.

    • tail <file>

    • tail -n <number> <file> (specifies the number of lines)

    • tail -f <file>: Follows the file, displaying new lines as they are added (useful for real-time log monitoring).

      tail -f /var/log/syslog
      

Searching

  1. grep (Global Regular Expression Print): Searches for a pattern in a file or input.

    • grep <pattern> <file>: Searches for the pattern within the specified file.
    • grep -i <pattern> <file>: Performs a case-insensitive search.
    • grep -v <pattern> <file>: Displays lines that do not match the pattern.
    grep "error" /var/log/syslog
    grep -i "hello" file2.txt
    

    Example Output: (If /var/log/syslog contains lines with “error”)

    Oct 26 10:15:22 server kernel: [12345.678]  Error: Something went wrong.
    Oct 26 10:20:30 server app[1234]: ERROR: Failed to connect.
    

Other Useful Commands

  1. echo: Displays text. Commonly used for printing variables or messages.

    echo "Hello, world!"
    echo $HOME
    

    Example Output:

    Hello, world!
    /home/user
    
  2. man (Manual): Displays the manual page for a command.

    man ls
    man grep
    
  3. history: Shows a list of previously executed commands.

     history
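
    For example, you can filter the history or re-run entries directly (the entry number below is hypothetical):

     history | grep ssh   # Show only past commands containing "ssh"
     !!                   # Re-run the previous command
     !245                 # Re-run command number 245 from the history list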
    
  4. clear: Clears the terminal screen.

    clear
    

Expanding Our Shell Command Toolkit

Building upon the foundation of common commands, we’ll now explore commands for locating files, efficiently searching command history, and determining file sizes.

  1. Finding Files: find

    The find command is an incredibly versatile tool for locating files based on various criteria, such as name, size, modification time, and permissions.

    find <directory> -name <filename>: Finds files with the specified name within a directory.

    find /home/user/documents -name "report.txt"
    

    Example Output (if report.txt exists in /home/user/documents/project1):

    /home/user/documents/project1/report.txt
    

    find <directory> -type f: Finds only files (as opposed to directories).

    find / -type f -name "important.conf" #Find all important config files
    

    find <directory> -type d: Finds only directories.

    find /home/user -type d -name "images" #Find all directories named images
    

    find <directory> -size +<size>: Finds files larger than the specified size (size is specified in blocks, kilobytes (k), megabytes (M), or gigabytes (G)).

    find /var/log -size +10M  # Find files larger than 10MB in /var/log
    

    find <directory> -mtime -<days>: Finds files modified within the last <days> days.

    find /home/user/documents -mtime -7  # Find files modified in the last 7 days 
    

    -exec flag: Allows you to execute a command on each file found by find. This is incredibly powerful. The {} is replaced by each filename found, and the escaped semicolon (\;) terminates the command.

    find . -name "*.txt" -exec ls -l {} \;  # List details of all .txt files in the current directory
    find . -name "*.txt" -exec rm {} \;      # Delete all .txt files in the current directory (USE WITH CAUTION!)
    
  2. Reverse Search in History: Ctrl+r

    Instead of scrolling through your command history with the up and down arrow keys, you can use reverse search to find a previously executed command by typing a part of it.

    1. Press Ctrl+r.
    2. Type a few characters of the command you’re looking for.
    3. The shell will display the most recent matching command from your history.
    4. Press Ctrl+r again to cycle through other matching commands.
    5. Press Enter to execute the found command, or Esc to cancel the search.

    Example: If you want to re-run a grep command that you used to search for “error” in the syslog, you can press Ctrl+r and type “grep error”. The shell will display the most recent grep command containing those words.
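
    For illustration, while searching, the prompt looks roughly like this (the exact appearance depends on your terminal):

    (reverse-i-search)`grep er': grep "error" /var/log/syslog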

  3. Getting File Sizes:

    ls -l (Long Listing): As we discussed, this lists files with detailed information, including the file size in bytes. However, for larger files, it can be hard to read.

    ls -l myfile.dat
    

    Example Output:

    -rw-r--r-- 1 user user 123456789 Oct 27 11:00 myfile.dat
    

    ls -lh (Long Listing with Human-Readable Sizes): This is the same as ls -l, but displays file sizes in human-readable format (e.g., KB, MB, GB). This is generally preferred.

    ls -lh myfile.dat
    

    Example Output:

    -rw-r--r-- 1 user user 118M Oct 27 11:00 myfile.dat
    

    du (Disk Usage): Estimates file and directory space usage.

    • du <file>: Displays the disk usage of a single file.
    • du -h <file>: Displays the disk usage of a single file in human-readable format.
    • du -sh <directory>: Displays the total disk usage of a directory as a human-readable summary.
    du -sh /home/user/documents
    du -h myfile.dat
    

    Example Output:

    1.2G    /home/user/documents
    118M    myfile.dat
    

    stat (Status): Displays detailed file status information, including size in bytes, access time, modification time, and more.

    stat myfile.dat
    

    Example Output:

    File: myfile.dat
    Size: 123456789         Blocks: 241304     IO Block: 4096   regular file
    Device: 801h/2049d        Inode: 327685      Links: 1
    Access: (0644/-rw-r--r--)  Uid: ( 1000/   user)   Gid: ( 1000/   user)
    Access: 2023-10-27 11:00:00.000000000 -0400
    Modify: 2023-10-27 11:00:00.000000000 -0400
    Change: 2023-10-27 11:00:00.000000000 -0400
    Birth: -
    

    Putting it All Together:

    These commands can be combined to perform complex operations. For example:

    “Find all files in /var/log larger than 100MB and list their details:”

     find /var/log -size +100M -exec ls -lh {} \;
    

Advanced Shell Commands and Automation

Text Processing Powerhouses

  • sed (Stream Editor): A powerful non-interactive text editor used for performing substitutions, deletions, insertions, and other transformations on text. It operates on a stream of input, making it ideal for processing files or data from pipes.

    • sed 's/<pattern>/<replacement>/g' <file>: Replaces all occurrences of <pattern> with <replacement> in <file>. The g flag indicates global replacement (replace all matches on each line, not just the first).

    • sed -i 's/<pattern>/<replacement>/g' <file>: Modifies the file in-place. Use with CAUTION! Always back up important files before using -i.

    • sed '/<pattern>/d' <file>: Deletes lines containing <pattern>.

    • sed '1,<n>d' <file>: Deletes the first <n> lines.

      sed 's/old_text/new_text/g' input.txt > output.txt # Replace all instances and store to a new file
      sed -i 's/  */ /g' input.txt       # Collapse runs of spaces into single spaces, in-place (VERY CAREFUL!)
      sed '/ERROR/d' logfile.txt             # Display logfile.txt, removing lines containing "ERROR"
      sed '1,5d' logfile.txt # Delete first 5 lines
      
  • awk: A programming language designed for text processing. It’s especially useful for extracting and manipulating data from structured text files (like CSV or log files).

    • awk '{print $1}' <file>: Prints the first field (column) of each line in <file>. Fields are typically separated by whitespace.

    • awk -F',' '{print $2}' <file>: Prints the second field of each line in <file>, where fields are separated by commas.

    • awk '$3 > 100 {print $1, $3}' <file>: Prints the first and third fields of lines where the third field is greater than 100.

      awk '{print $1}' data.txt  # Print the first column
      awk -F',' '{print $2}' data.csv # Print the second column, using a comma as the field separator
      awk '$NF ~ /pattern/ {print $0}' file.txt # Print lines whose last field matches a pattern
      

      Example Output (the first field of each line, assuming data.txt contains space-separated data):

       field1_1
       field2_1
       field3_1
      
  • sort: Sorts lines of text.

    • sort <file>: Sorts the lines of <file> alphabetically.

    • sort -n <file>: Sorts numerically.

    • sort -r <file>: Sorts in reverse order.

    • sort -k <column_number> <file>: Sorts based on a specific column.

    • sort -u <file>: Outputs only unique lines (duplicates removed).

      sort names.txt           # Alphabetically sort names
      sort -n numbers.txt          # Sort numerically
      sort -nr numbers.txt #Reverse sort numerically.
      sort -k2 data.txt          # Sort based on the second column
      
  • uniq: Filters out repeated lines. Often used in conjunction with sort.

    • uniq <file>: Removes adjacent duplicate lines from <file>.

    • uniq -c <file>: Counts the number of occurrences of each unique line.

       sort names.txt | uniq  # Remove duplicate names (must be sorted first)
       sort names.txt | uniq -c  # Count the occurrences of each name
      

      Example Output (if names.txt contains “Arjun” once, “Bheema” twice, and “Nakula” three times):

      1 Arjun
      2 Bheema
      3 Nakula
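
These tools become far more powerful when chained together with pipes. As a sketch, assuming a hypothetical web server log access.log whose first whitespace-separated field is a client IP address, the following pipeline prints the five most frequent addresses:

    awk '{print $1}' access.log | sort | uniq -c | sort -nr | head -n 5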
      

Process Management

  • ps (Process Status): Displays information about running processes.

    • ps: Shows processes owned by the current user.

    • ps aux: Shows all running processes with detailed information (user, PID, CPU usage, memory usage, command).

    • ps -ef: Another way to display all running processes in a different format.

      ps aux | less       # View all running processes, one page at a time
      ps -ef | grep firefox   # Find the process ID (PID) of Firefox
      
  • top: Displays a dynamic real-time view of running processes, showing CPU and memory usage. Press q to exit.

    top
    
  • kill: Sends a signal to a process, typically to terminate it.

    • kill <PID>: Sends the default TERM (terminate) signal to the process with the given PID.

    • kill -9 <PID>: Sends the KILL signal (SIGKILL), which forcefully terminates the process without giving it a chance to clean up. Use it only as a last resort.

      ps aux | grep myapp  # Find the PID of "myapp"
      kill 1234               # Terminate process with PID 1234
      kill -9 1234           # Forcefully terminate process with PID 1234
      
  • bg: Resumes a stopped job in the background.

  • fg: Brings a background job to the foreground.
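
    A minimal job-control session might look like this (press Ctrl+Z to stop the running foreground job first):

      sleep 300   # Start a long-running command, then press Ctrl+Z to stop it
      bg          # Resume the stopped job in the background
      jobs        # List current jobs and their states
      fg %1       # Bring job number 1 back to the foreground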

Shell Scripting and Automation

Shell scripting allows you to combine multiple commands into a single executable file, automating complex tasks.

  • Creating a script: Create a text file (e.g., my_script.sh) with the following structure:

    #!/bin/bash
    # The shebang line above specifies the interpreter for the script
    # Comments start with #
    echo "Starting my script..."
    date
    ls -l /var/log | grep "auth.log"
    echo "Script finished."
    
  • Making the script executable:

    chmod +x my_script.sh
    
  • Running the script:

    ./my_script.sh
    
  • Variables: Store values.

    #!/bin/bash
    FILENAME="report.txt"
    DIRECTORY="/home/user/documents"
    echo "Searching for $FILENAME in $DIRECTORY"
    find "$DIRECTORY" -name "$FILENAME"
    
  • Conditional statements (if, then, else, fi):

    #!/bin/bash
    FILE="myfile.txt"
    if [ -f "$FILE" ]; then
       echo "$FILE exists."
    else
       echo "$FILE does not exist."
    fi
    
  • Loops (for, while):

    #!/bin/bash
    for i in 1 2 3 4 5; do
      echo "Number: $i"
    done
    
    #!/bin/bash
    i=1
    while [ $i -le 5 ]; do
      echo "Number: $i"
      i=$((i + 1))
    done
    

    Example Script: Log Rotation

    This script rotates a log file by renaming the existing log, creating a new empty log, and compressing the old log.

    #!/bin/bash
    LOG_FILE="/var/log/myapp.log"
    DATE=$(date +%Y%m%d)
    BACKUP_LOG="$LOG_FILE.$DATE.gz"
    
    # Check if log file exists
    if [ ! -f "$LOG_FILE" ]; then
       echo "Log file $LOG_FILE does not exist."
       exit 1
    fi
    
    # Rotate the log
    echo "Rotating log $LOG_FILE..."  
    mv "$LOG_FILE" "$LOG_FILE.$DATE"
    gzip "$LOG_FILE.$DATE"
    touch "$LOG_FILE"
    echo "Log rotation complete. Backup: $BACKUP_LOG"
    

Key Concepts for Automation

  • Cron jobs: Schedule scripts to run automatically at specific times or intervals. Use crontab -e to edit your crontab file (a combined example follows this list).

  • Piping and redirection: Connect commands and redirect input/output to create complex workflows.

  • Error handling: Implement error checking and handling in your scripts to make them more robust.

  • Parameterization: Write scripts that accept command-line arguments to make them more flexible.
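
The following sketch ties these concepts together: a small, hypothetical backup script (backup.sh) that takes a directory as a command-line argument, does basic error handling, and can be scheduled with cron. The paths and the crontab entry are illustrative, not prescriptive.

    #!/bin/bash
    # Usage: ./backup.sh <directory>    (parameterization via $1)
    set -e                              # Error handling: exit immediately if any command fails

    SOURCE_DIR="$1"
    if [ -z "$SOURCE_DIR" ] || [ ! -d "$SOURCE_DIR" ]; then
       echo "Usage: $0 <directory>" >&2 # Redirection: send the error message to stderr
       exit 1
    fi

    ARCHIVE="/tmp/backup_$(date +%Y%m%d).tar.gz"
    tar -czf "$ARCHIVE" "$SOURCE_DIR"
    echo "Backup of $SOURCE_DIR written to $ARCHIVE"

    # Example crontab entry (edit with crontab -e) to run the script every night at 2:00 AM,
    # appending both stdout and stderr to a log file (redirection):
    # 0 2 * * * /home/user/scripts/backup.sh /home/user/documents >> /var/log/backup.log 2>&1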

Important Considerations

  • Security: Be extremely cautious when writing scripts that modify system files or execute commands with elevated privileges. Always validate your input and sanitize data to prevent security vulnerabilities.

  • Testing: Thoroughly test your scripts before deploying them to production.

  • Documentation: Comment your scripts to explain what they do and how they work.

By mastering these advanced commands and scripting techniques, you can automate a wide range of tasks, improve your efficiency, and become a true shell scripting expert. Remember to practice regularly and experiment with different commands and techniques to expand your skills.