Linux commands I use often - Quick-reference

I've been using Linux for ~25 years. After a while, most of these commands become second nature, so one takes for granted how much there is to know, but it probably seems like a lot to learn to a beginner.

I'm just writing up some of the commands I use often, in case this might be useful to someone.


Note: Linux and other Unix-y systems use forward-slash (not backslash as in Windows) as a file/path separator. "." denotes the current directory, and ".." means "parent directory" (i.e. one folder up relative to current path).

Also note that by convention, files and folders starting with "." are regarded as hidden in Linux/Unix. This is different to Windows, where 'hidden' is a special file attribute.
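
A quick sketch of the difference (using a throwaway directory under /tmp; the names are arbitrary):

```shell
# Make a scratch directory containing one normal and one 'hidden' file
mkdir -p /tmp/hiddendemo
touch /tmp/hiddendemo/visible.txt /tmp/hiddendemo/.hidden.txt

# Plain 'ls' omits files starting with "."
ls /tmp/hiddendemo           # prints: visible.txt

# 'ls -a' includes them (along with the "." and ".." entries)
ls -a /tmp/hiddendemo
```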

See my Linux Tutorial for more basics.

man ...

Get help on a given command ('man' is short for 'manual')

You can type "man" followed by any command on this page, and it will show you more detailed information on that command, e.g. "man uname".

Press 'q' to exit 'man'.

Note that in Linux/Unix, many commands also give help if you pass in "--help" (or sometimes "-h") as a parameter, e.g. "mysql --help" or "mysql --help | more".


pwd

Show the current directory you're in ('pwd' is short for 'print working directory').


cd ...

Change directory

cd ..

Change to parent directory of current directory


whoami

Show the current username you're logged in as.


ls

List files and folders in the current directory.

ls -la

List files and folders in the current directory in long detailed format.

ls -lart

List files and folders in the current directory in long detailed format, sorted by modification date and time ("-t" sorts newest first; "-r" reverses that, so the most recently modified files appear last, just above your prompt).


more

Use this to paginate output, i.e. to break up the output of other commands (e.g. file listings) into 'pages' and show them one page at a time, pausing between each page, e.g.:

ls -la | more

Use 'q' to quit, or 'space' to show next page.


less

Similar idea to 'more', but also lets you scroll backwards. This can also take a filename as a parameter, in addition to accepting 'piped' input.

cat ...

Show the contents of the given file(s) passed as parameter(s) (or fed in as input via a pipe). Short for "concatenate".


cp

Copy files, e.g. "cp *.html ../backups"

cp -r

Copy folders recursively


mkdir

Create a directory. Use "mkdir -p" to suppress the error if the directory already exists; "-p" also creates any missing intermediate parent directories.
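
A minimal sketch (hypothetical directory names):

```shell
# "-p" creates any missing intermediate directories in one go...
mkdir -p /tmp/mkdirdemo/a/b/c

# ...and is silent if the directory already exists
# (plain mkdir would print an error here)
mkdir -p /tmp/mkdirdemo/a/b/c

ls /tmp/mkdirdemo/a/b        # prints: c
```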


mv

Move or rename files/folders.


rm

Remove files. Warning: this command is dangerous. Use "-r" if removing a folder.

rm -r

Remove files or folders, recursively. Warning, this command is dangerous.

rm -rf

WARNING: This command is dangerous. Remove files/folders recursively, forcing removal without asking if you're sure.

If you run e.g. 'rm -rf /' while superuser i.e. 'root', it will immediately start deleting everything on your entire system.

echo ...

Repeat ('echo' back to you) what you type after the echo command. Used mainly in scripts to generate output.
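
A couple of simple examples:

```shell
# Echo a literal string back
echo "Hello, world"          # prints: Hello, world

# Shell variables are expanded inside double quotes, which is why
# echo is handy for generating output (and debugging) in scripts
GREETING="Hello from Linux"
echo "$GREETING"             # prints: Hello from Linux
```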

Disk usage

df -h

List mounted drives, and show disk usage and available space on the drives.

Also useful for seeing the device (e.g. "/dev/sdc1") of a mounted drive.

du -sch *

Show disk space usage of files/folders in current directory, in human-readable format

du -sck * | sort -n

Show disk space usage of files/folders in the current directory, sorted by total disk space used ("-k" gives sizes in kilobytes, since the human-readable "-h" sizes don't sort correctly with a plain numeric sort).
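
The "-n" on 'sort' matters here: it compares numerically rather than alphabetically (where "10" would sort before "2"). A quick illustration:

```shell
# Alphabetic (default) sort compares character by character
printf '10\n2\n1\n' | sort      # prints: 1, 10, 2 (each on its own line)

# Numeric sort orders by value, which is what we want for disk sizes
printf '10\n2\n1\n' | sort -n   # prints: 1, 2, 10
```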

Terminal-based text editors


These days I mostly use 'nano' to edit text files while in a shell/terminal. I used to use 'joe' more often, but it seems 'joe' is now rarely installed by default on modern distros.

nano

Relatively more user-friendly than editors like 'vi'.


joe

Relatively more user-friendly than editors like 'vi'.


vi

Commonly used text editor.

Occasionally one finds oneself in situations where one must edit a file in 'vi', so it's good to at least know the basics, even if you don't use it as your main editor.

System Monitoring


top

Show CPU and memory usage of running processes (e.g. to see what's mostly keeping the CPU busy). Press 'q' to quit.


htop

A newer application similar to 'top', but showing some slightly more advanced info.

ps fax (or "ps ax" on some systems where the "f" option is unavailable)

List running processes.

/sbin/ifconfig (now replaced by "ip address" on some newer systems)

Show network adapters and e.g. IP address information.

netstat -a (now replaced by "ss" on some newer systems)

Show open network connections and other related info.

Process Management


kill ...

Terminate a running process. Requires the process ID, or PID (use 'ps' to find the PID).

nice / renice

Change the priority of processes. To be honest, I don't use this often, but occasionally it's useful.


fg

Bring back to the foreground a process that you suspended (often by accident rather than on purpose - e.g. if you're in "nano" and hit "Ctrl+Z" by mistake, it looks like you've exited nano, but in fact it's only suspended in the background; "fg" brings it back to the foreground).

System Information

uname -a

Show the system name and other basic architecture information, e.g. "Linux pi 4.19.118-v7l+ #1311 SMP Mon Apr 27 14:26:42 BST 2020 armv7l GNU/Linux"

cat /etc/fstab

Configuration of drives to be mounted on system.

cat /etc/mtab

Show mounted drives, including 'virtual' drives.


blkid

Show unique IDs (UUIDs) of drives.

cat /proc/cpuinfo | more

Show information about the system processor (CPU).

GUI (Graphical User Interface) Text Editors


GUI text editor I sometimes use.

Text/Terminal File Managers


mc

Midnight Commander: Dual-panel style file manager similar to the famous Norton Commander (and Total Commander on Windows). I used to use 'mc' a lot, but now rarely use it.

GUI (Graphical User Interface) File Managers


Dual-panel style file manager similar to the famous Norton Commander (and Total Commander on Windows).

IDEs (Integrated Development Environment)


Fairly decent development environment I sometimes use for e.g. C++ projects.

Software Development


make

Commonly used application for building software projects.


git

Commonly used source code repository and revision control system.


svn

Subversion, another commonly used source code repository and revision control system. It used to be more popular than git, but was overtaken as git is more powerful.


hg

Mercurial, another now less commonly used source code repository and revision control system, similar to git.

Package Managers

I still regard package managers as a relatively "new" development in the Linux world, but I guess "new" is a relative term.


apt

Package manager found on many Linux systems, e.g. Debian and Debian-based Linux systems.

apt install ... (or "sudo apt install ...")

Install a package. E.g. "sudo apt install mc" can be used to install Midnight Commander. It will tell you if the package is unavailable, and will ask 'are you sure' before installing if it is available.

Find out which package contains a given file

You can use "dpkg --search ..." (or "dpkg -S ...")

Shell tips: Keyboard shortcuts

Tab - Filename auto-completion

Ctrl+R - Search backwards through your recent command history; press Enter to re-run the matched command

Working with tar, gz, 7z, zip files etc.

gzip / gunzip

Commonly used file compression utility with ".gz" file extension.

A ".tar" file (or "tarball") is a (single) file just containing other files and folders, bundled into a single file. This is typically used for tasks like distributing software, or making backups, or archiving files. The .tar itself is not compressed, but is typically compressed by other software like "gzip" (.gz extension) to create e.g. a ".tar.gz" file.

tar -xvzf ...

Decompress (-z) and extract (-x) a given file in the common double-barrel format ".tar.gz" (a ".tar" file, compressed with "gzip")

tar -xvf ...

Extract a given file in the ".tar" format (a plain .tar is not itself compressed, so no "-z" is needed).

tar -c MyDirectoryToTar > MyDirectoryToTar.tar

Create a .tar file of a directory on your hard disk.

tar -c MyDirectoryToTar > MyDirectoryToTar.tar && gzip --best MyDirectoryToTar.tar

Create a .tar file of a directory on your hard disk and if successful then compress it, with maximal gzip compression ("--best" increases gz compression but will take longer to compress and use more CPU). You should end up with a file named "MyDirectoryToTar.tar.gz" here.
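
A round-trip sketch, using tar's "-f" and "-z" options to archive and compress in a single step (a slight variation on the two-step command above; directory and file names are hypothetical):

```shell
# Create a small directory to archive
mkdir -p /tmp/tardemo/mydir
echo "some data" > /tmp/tardemo/mydir/file.txt
cd /tmp/tardemo

# "-c" create, "-z" gzip on the fly, "-f" write to the named file
tar -czf mydir.tar.gz mydir

# Extract it into a separate folder to check the round trip
mkdir -p restored
tar -xzf mydir.tar.gz -C restored

cat restored/mydir/file.txt     # prints: some data
```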

Other compression utilities: unzip, 7z/7za, bunzip2

Remote File Copying

scp; ftp; rsync

These are different ways of transferring files locally or remotely. NB: ftp is old and NOT secure, but is sometimes the only thing available for file transfer, so it's useful to know the basics (e.g. 'bin', 'hash', 'prompt' etc.). "scp" is secure, running over ssh, and rsync is generally secure if used 'over ssh'.

Remote Access


vncserver

Create a remotely-accessible desktop session that can be viewed with a VNC viewer. One nice thing with Linux is that you can create many separate 'virtual desktop' sessions - e.g. different users on the same system can each run their own separate VNC desktop, each accessible via a different port.


ssh

This is used for secure access to remote systems, as well as secure file transfer (e.g. using commands like 'scp' over ssh).

Other (insecure) protocols can also be "tunnelled" through/over SSH to wrap them in a secure layer.

Piping: Basics

The output of a Linux/Unix command can be fed in as the input to the next command. This is called 'piping'.

It's called this as it uses the "pipe" symbol, "|".

For example, say we want to pause between each pageful of information generated by the output of "ls -la" (list files in long detailed format). We can do so by redirecting, or 'piping' its output as follows:

ls -la | more

This can be used for many things, e.g. we can search the output of "ls -la" for some text by using "grep", e.g.:

ls -la | grep html

These can also be written without spaces around the pipe symbol. It looks less readable, e.g. in scripts, but is less typing. I tend to always put the spaces, out of habit:

ls -la|grep html

We can also feed the content of a file into a command using the "<" redirect, e.g. like so:

grep -i linux < file.txt
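
For example, with a small throwaway file (the name and contents are made up):

```shell
# Create a small test file
printf 'I use Linux daily\nWindows is different\nlinux again\n' > /tmp/grepdemo.txt

# Feed the file in via "<"; "-i" makes the match case-insensitive,
# so this prints both the "Linux" and "linux" lines
grep -i linux < /tmp/grepdemo.txt
```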


xargs

More info to be added here later.

Press Ctrl+C to exit if you accidentally just entered 'xargs' on its own.
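
On its own, 'xargs' reads whitespace-separated items from its input and passes them as arguments to the given command. A tiny illustration:

```shell
# echo ignores piped input (it only looks at its arguments);
# xargs bridges the gap by turning input items into arguments.
# "-n 1" runs the command once per item:
echo "one two three" | xargs -n 1 echo
# prints:
# one
# two
# three
```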

xargs example to search in files for some text (non-case-sensitive)

find . -type f -name "*.html" | xargs grep -i sometext

This searches recursively in the current folder for .html files, and searches these files for "sometext".

For case-sensitive search remove the "-i" argument from grep.

Search in files for some text, pausing on each page of results:

find . -type f -name "*.html" | xargs grep -i sometext | more

This searches recursively in the current folder for .html files, and searches these files for "sometext", pausing on each page of results.

One caveat: filenames with spaces in them may cause issues - in that case, use find's "-print0" together with xargs' "-0" option:

find . -type f -name "*.html" -print0 | xargs -0 grep -i sometext
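
A quick sketch of why this matters (throwaway names and content):

```shell
# A filename containing a space (a classic troublemaker)
mkdir -p /tmp/xargsdemo
echo "sometext here" > "/tmp/xargsdemo/my page.html"

# Plain xargs would split "my page.html" into two bogus filenames;
# -print0/-0 separate names with NUL bytes instead, which is safe.
# grep's "-l" prints just the names of matching files:
find /tmp/xargsdemo -type f -name "*.html" -print0 | xargs -0 grep -il sometext
# prints: /tmp/xargsdemo/my page.html
```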

[ADVANCED] mysql

Import a dumpfile into a MySQL database:

mysql -p -u username databasename < dumpfile

mysql -p -h server -u username databasename < dumpfile

Import a dumpfile into a MySQL database. If no server/hostname is specified ("-h"), it assumes 'localhost'.

Create a mysql dumpfile:

mysqldump -p -h server -u username databasename [tables] > dumpfile

If you get an error about COLUMN_STATISTICS, add "--column-statistics=0".

[ADVANCED] Git-fu: Slicing and Dicing git repos

Most of the below are not really commands you'd use often unless you're a git repo maintainer or something.

I'm just adding these here for my own copy-and-paste reference, having now done a lot of 'slicing and dicing' of git repositories (e.g. converting and splitting large old Subversion repositories into multiple small git repos, deleting content from git histories, and so on).

NB 1: ALWAYS, ALWAYS, ALWAYS completely back up your repository before trying any of these; then make a clone, and try these out only on the clone. Only once you're sure it's doing what you want should you run them on anything that matters.

NB 2: If you run git commands that remove files from history on actual live public github projects that others have forked etc., and then you 'force-push', you may wreck their forks/branches etc., so be careful. These are more useful for private repos where you know 'who has used it for what'.

NB 3: Some of these basically only work properly on Linux. Some don't work on macOS or Windows. Trust me, I've tried. So if you must do repository manipulation like this, it's best to set up a Linux system or virtual machine where you can do it. Some of these may work fine on other platforms, but not all, or they may need slight adapting.

Show list of all authors in git history, by contribution count

git shortlog -s -n -e

Keep only specific subfolder in a repo, completely deleting everything else

Make sure to back up all the other folders if you want to keep them as separate repos before doing this. (Note: newer versions of git warn that 'filter-branch' has pitfalls and suggest the separate 'git-filter-repo' tool instead; the commands below still work, but expect the warning.)

git filter-branch --prune-empty --subdirectory-filter FOLDER-NAME [BRANCH-NAME]


git filter-branch --prune-empty --subdirectory-filter Source/ProjectIWantToKeep [BRANCH-NAME]

NB, after doing this, the files and folders inside the "Source/ProjectIWantToKeep" subfolder will now be the TOP-LEVEL files/folders of the repo.

Keep only specific multiple subfolders in a repo, completely deleting everything else

Similar to the previous one, but allows you to keep 2 or more subfolders, but still deleting everything else. E.g.:

git filter-branch --index-filter 'git rm --cached -qr --ignore-unmatch -- . && git reset -q $GIT_COMMIT -- Source/ProjectIWantToKeep1 web/ProjectIWantToKeep2 Source/ProjectIWantToKeep3' --prune-empty -- --all

Note that unlike the previous approach, this retains the subfolder hierarchy, since more than one subfolder is being kept.

Remove specific file/folder(s) by name or wildcard from git history

git filter-branch --force --index-filter "git rm --cached --ignore-unmatch SomeUndesiredContentPath/*" --prune-empty --tag-name-filter cat -- --all

Note, after running this, you may need to clone and run the 'reflog expire' command below, and possibly clone again to see the content fully fully removed.

List of the 100 largest files in your git repository, across their full history

git rev-list master | while read rev; do git ls-tree -lr $rev | cut -c54- | sed -r 's/^ +//g;'; done | sort -u | perl -e 'while (<>) { chomp; @stuff=split("\t");$sums{$stuff[1]} += $stuff[0];} print "$sums{$_} $_\n" for (keys %sums);' | sort -rn | head -n 100

List of the 1000 largest files in your git repository, redirected to a file

git rev-list master | while read rev; do git ls-tree -lr $rev | cut -c54- | sed -r 's/^ +//g;'; done | sort -u | perl -e 'while (<>) { chomp; @stuff=split("\t");$sums{$stuff[1]} += $stuff[0];} print "$sums{$_} $_\n" for (keys %sums);' | sort -rn | head -n 1000 > TOP1000.txt

I don't know who sat down and came up with this cool but complex little one-liner, but it creates a file, e.g. "TOP1000.txt", containing a list of the objects in your git repo history, sorted largest first.

You can then use e.g. 'cat' or 'less' or 'more' or 'nano' etc. to analyze, and decide if specific large objects should go etc.

This one probably only works on Linux, I think.

Force actual removal of removed objects from git history

Use this after e.g. 'truly' removing content from history as per the above, when you no longer want it in the repo. Sometimes it takes an extra clone step before or after this to see the actual reduction.

git reflog expire --expire=now --all && git gc --prune=now --aggressive