PFS (PROCESS FOLLOW SCRIPT) & MONITOR COMMANDS (no config, no extra programs needed) – remove duplicates without sorting, like uniq – sorting ps output

MONITORING COMMANDS

UPDATE – my new discovery, and the top program for this: SNOOPY (thanks to a user on reddit for helping me find it).

(the top 2 scripts come first)

Ever wonder what all of the commands run on your box in a given time period, or during a certain operation, actually are? Let's say you have a Linux server and you click something in a GUI application (one that runs on that server – maybe some sort of management interface that initiates backups), and you wonder: "when I click this backup button, what does it actually do on the Linux back end? It says it does an rsync, but what are all of the switches and options it actually runs that rsync command with?" Well, if that GUI app runs a Linux command, with this you can find out what it was.

This article is really long and has many commands with different variations of output, so I selected the best 2 commands (the ones I feel will be used the most). Both have their pros and cons.

At the top I like to point out the winners (the best commands, the ones I feel will get used the most), so if you're using this as a reference you don't have to scroll much. Here are the ones that will get used the most, because they have the best resolution (they might not give the most interesting output at first, but analytically they are the best).

Use either command1 or command2 depending on whether you need to capture short-running or long-running commands. Short-running commands like ls need the higher-resolution capture of command2. Slow commands like copies, moves, rsyncs, or daemons can be captured with command1. Command2 will capture the most commands, but its output is sorted and can't be followed.

(command1) The winner from PFS is the Tee Version (OUTPUT: appears on screen and is saved to a file, with timestamps. TO USE: just copy-paste it into a shell and press enter, or make a script out of it and run that; while it's running, perform the operations whose commands you want to see and they will appear on the screen. NOTE: this captures maybe 90% of the commands that run – by 90% I just mean not all of them; some quick commands slip past between snapshots):

PRO: the script's output can be followed with a program like tail -f, with the most recently run commands appearing at the bottom of the list. In fact, just running the script below shows the output in a tail -f fashion, so you don't have to run a separate tail -f command. Just paste the script below into a shell and hit enter.

CON: slower resolution; it might miss a few short-running commands like ls, netstat, etc.

(command2) The winner from the MONITOR MOST COMMANDS section (OUTPUT: saves to the file all-comm.txt. TO USE: copy-paste and press enter, or run it as a script, then do the operations you want to record the commands of; once you think those commands are done, close out of the monitoring script with Control-C):

PRO: high resolution; captures most of the fast commands (maybe even all of them). Can capture commands like ls, cat, etc.

CON: the script's output is sorted, so you can't follow it with a command like tail -f (the most recent commands aren't appended to the bottom – they are sorted throughout the entirety of the file – so a follow-type command like tail -f wouldn't make sense and wouldn't work).

You can read the rest of the article for other variations and explanations etc.

tl;dr from here on out

PFS

PROCESS FOLLOW

Audit commands that are run by the system (you can't see which user ran them, but you can just copy-paste the script into a shell and follow all of the commands).

This will generate a list of all the processes and keep doing so continuously, only adding new processes. You can follow the output file (psf.txt – you can change the file name by editing the value of $OUT1) with tail -f.

You can make a script out of the program below (the TEE VERSION) and run it like this:

START SCRIPT:
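(assuming you saved the TEE VERSION below as psf.sh – the original start commands were lost from this copy, but it would just be)

chmod +x psf.sh
./psf.sh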

NOTE: Output saves to psf.txt

OTHER START METHOD:
You can also just copy-paste the script straight into a shell and you will see the output right there.

You can follow the resulting psf.txt with:

FOLLOW RESULTS:
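tail -f psf.txt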

Turning off psf can be done with kill or kill -9 on the PID, or you can Control-C it as well. You can restart the psf.sh command and it will use the existing results as its starting point. If you want to clear the results, do this first:

CLEARING RESULTS:
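(the original command was lost here; clearing just means getting rid of the old results file)

rm psf.txt        # or truncate it instead: > psf.txt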

 

NOTE: all results dump to the current directory

NOTE: I had to work around a grep bug/feature: when you feed it a pattern that starts with a dash, it thinks the pattern is an argument and freezes, e.g. grep "-bash" File. Instead you need to look for it like this: grep "\-bash" File. I also had to save the contents to the file as \-bash instead of -bash, so that the next time the script greps for \-bash it acknowledges its existence. If I appended -bash instead of \-bash, then when the grep for \-bash ran we would have issues.
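Here is an illustration of the quirk (my example, not from the original script):

grep "-bash" psf.txt      # grep parses "-bash" as the options -b -a -s -h, then
                          # waits on stdin for input – it looks frozen
grep "\-bash" psf.txt     # works: the argument no longer starts with a dash
grep -- "-bash" psf.txt   # alternative: "--" tells grep to stop parsing options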

NOTE: if I used regular grep, extended grep, or word grep, you would get out-of-range issues, because some program names might have characters that grep would interpret as regex syntax. So the best option is to run grep with -F (fixed strings).

NOTE: some variables are called like this, $VAR, and some like this, ${VAR} – why the difference? Laziness. Who cares, same result.

Try all of the methods below. Most require tailing the result file in another shell to see the results, while various statistics print on the screen at the same time (the more statistics, the slower the resolution). The first one, called TEE VERSION, doesn't require tailing – just run it and all of the results can be seen from the same shell.

TEE VERSION – BEST SIMPLE VERSION
==============================
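The script itself was lost from this copy of the post, so here is a minimal sketch of the tee version, reconstructed from the notes above (psf.txt, $OUT1, the -F grep, the \- escaping, and the timestamps are all from the article; the exact loop body is my assumption):

#!/bin/bash
OUT1=psf.txt                                  # results file, change if you want
touch "$OUT1"
while true; do
    ps -e -o args= | while IFS= read -r CMD; do
        # store a leading dash as \- so later greps dont read it as options
        case $CMD in -*) CMD="\\$CMD";; esac
        # -F = fixed-string match, so regex characters in command lines are safe
        # (this is a substring check, so very short commands can occasionally be
        # mistaken for already-seen ones)
        if ! grep -Fq "$CMD" "$OUT1"; then
            # new command: timestamp it, show it on screen, append it to the file
            printf '%s %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$CMD" | tee -a "$OUT1"
        fi
    done
    sleep 0.1    # snapshot gap: smaller = better resolution but more CPU
done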

 

VISUAL VERSION
===============

This version of the script has visual output

S = Start
F = Finish
. = when a process is checked that is already listed
+ = when a new process is found that is appended to list
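The visual script itself is missing here, but relative to the tee version sketch above, the inner loop would emit the markers instead of teeing – roughly (my sketch):

printf 'S'                                    # scan starts
ps -e -o args= | while IFS= read -r CMD; do
    case $CMD in -*) CMD="\\$CMD";; esac
    if grep -Fq "$CMD" "$OUT1"; then
        printf '.'                            # process already listed
    else
        printf '+'                            # new process, appended to the list
        printf '%s\n' "$CMD" >> "$OUT1"
    fi
done
printf 'F\n'                                  # scan finished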

EXTRA VISUAL (MORE INFO, PER LOOP)
===================================
This version of the script has visual output

S = Start
F = Finish
. = when a process is checked that is already listed
+ = when a new process is found that is appended to list

Also, the number of processes found on that loop (the total number scanned) is listed after each scan.
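Presumably the tally is printed at the end of each pass, something like (my guess):

printf ' %s processes\n' "$(ps -e -o args= | wc -l)"   # total scanned this loop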

 

NOW VISUAL – fastest resolution (you can tell once everything has been processed)
========================================================================

When you write less to the screen, each pass of the while loop can go through faster.

 

NOW VISUAL – fastest resolution – blank
========================================

When you write less to the screen, each pass of the while loop can go through faster – this variant keeps the screen blank except for results.


 

Below is an older method with which you couldn't follow the output with tail, because it sorted the output, among other limitations.

So the above is by far the best.


 

MONITOR MOST COMMANDS RAN BY THE SYSTEM

First off, useful links – these links do the same thing I'm planning to do, but they involve changing configs or installing programs, whereas mine is just a monitoring command run using the system tools already available to us:

Save every session to a hidden script/typescript file:
http://linuxers.org/article/script-command-line-tool-recordsave-your-terminal-activity

Modify the way hist control saves:
http://administratosphere.wordpress.com/2011/05/20/logging-every-shell-command/

Save each command to syslog cmdlog:
http://blog.kxr.me/2012/01/logging-shell-commands-in-linux.html

Same idea:
http://askubuntu.com/questions/93566/how-to-log-all-bash-commands-by-all-users-on-a-server

More:
http://serverfault.com/questions/336217/how-do-i-log-every-command-executed-by-a-user

Command audit program:
http://www.cyberciti.biz/tips/howto-log-user-activity-using-process-accounting.html

Now time for my program – it differs in that you can run it at any time: you don't need to set up an environment for it, install any commands, or change any config files.

The goal of this script is to generate a log of every command run (no timestamps, unfortunately – this is a simple version). All logs are saved into one file, denoted here by the variable OUT1 (you can change it if you want). Why do I say most commands are recorded and not all? Because the ps process snapshots occur on a timed basis, and it's possible one is taken at a moment when a program didn't run. Example: a snapshot happens, Program 400 runs and quits, then another snapshot happens, missing the fact that Program 400 ran.

This gives every process's info:
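(the original one-liner was lost; presumably a plain)

ps -ef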

This just lists the command part (so we don't have to awk it out):
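(again my guess at the lost one-liner – -o picks the column for us, and the "=" suppresses the header)

ps -e -o args=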

If we want to sort the output (not yet used in the main working model below), either plainly or collapsing duplicates:
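(my guesses at the two lost variants)

ps -e -o args= | sort        # plain sort
ps -e -o args= | sort -u     # sort and collapse duplicates in one step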

The rest is simple programming.

We keep appending to a file, always sorting that file, and always running uniq on the output file so that it doesn't grow infinitely big.

FIRST ATTEMPT (SORTED, AND UNFOLLOWABLE) 

Note: you can change where the file saves by changing the OUT1 variable. Currently it saves to the current directory under the file name "all-comm.txt".

The output also shows you how big that file is in lines, words, and characters (bytes). Run either of these commands, and when you're done monitoring, press Control-C. Look through the file with vi, cat, or whatever (grep if you want to).

WITH THE WATCH COMMAND (watch can be used like a while loop to repeat a command or commands):
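(the original command was lost from this copy; per the description above it likely looked something like this)

OUT1=all-comm.txt
watch -n 0.1 "ps -e -o args= >> $OUT1; sort -u -o $OUT1 $OUT1; wc $OUT1"

Note that sort -u -o writes the sorted, deduplicated result back over the same file, and wc shows the lines/words/bytes count on the watch screen.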

WITHOUT WATCH, USING A WHILE LOOP INSTEAD:
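(same idea, my sketch of the lost loop)

OUT1=all-comm.txt
while true; do
    ps -e -o args= >> "$OUT1"        # append this snapshot
    sort -u -o "$OUT1" "$OUT1"       # sort + uniq in place so the file stays small
    wc "$OUT1"                       # show lines / words / bytes so far
    sleep 0.1
done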

Look through the results with vi or cat, or grep through them.

Example: to see all of the rsync commands that were run on the system during monitoring (note: you can run this while the monitoring is still happening):
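grep "rsync" all-comm.txt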

Update: the best way to monitor with the above commands

Since we only have a tenth-of-a-second window to catch all of the programs, let's lower that, and let's also not print extra info to the screen that we don't need, as that takes extra time. Let's also just use the simpler program – not watch, but a while loop. The simpler, the faster; and the better the resolution, the more accurate the results.
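(a sketch under those exact assumptions – shorter sleep, nothing printed to the screen)

OUT1=all-comm.txt
while true; do
    ps -e -o args= >> "$OUT1"
    sort -u -o "$OUT1" "$OUT1"
    sleep 0.01      # tighter window; no screen output, so each pass is quicker
done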

 


 EXTRA NOTES

More to come… (UPDATE: it just came – read PFS above)

Make the output followable:

The problem with the above script is that to remove duplicates one must sort, so you can't follow the output anymore:

Sorting ps output

The one-liners differ between Linux/SysV5-style ps, Linux-specific ps, and Mac OS X:
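(the original one-liners were lost from this copy; these are my best guesses)

ps -ef | sort -k8              # SysV-style: pipe to sort, CMD is the 8th column
ps -e -o comm --sort=comm      # Linux: procps ps can sort natively with --sort
ps -ax -o comm | sort          # Mac OS X: BSD ps has no --sort, so pipe to sort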

What I need: a way to remove duplicates without sorting, so the output stays in order and remains followable.

Perhaps something like this (currently this is a bit weird):
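(the snippet was lost; my guess at a followable variant, combining the no-sort dedup from the next section with fflush so tail -f sees new lines immediately – the file name most-comm.txt is my invention)

while true; do ps -e -o args=; sleep 0.05; done | awk '!x[$0]++ { print; fflush() }' >> most-comm.txt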

In another shell:
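tail -f most-comm.txt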

Deduplication scripts thanks to: http://stackoverflow.com/questions/11532157/unix-removing-duplicate-lines-without-sorting

I'm using this one, awk '!x[$0]++', by Michael Hoffman.
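How it works: x[$0]++ returns the old count for that exact line (0 the first time it is seen), so !x[$0]++ is true only on first sight, and awk's default action prints the line – duplicates are dropped without any sorting, so the original order is preserved. For example:

ps -e -o args= | awk '!x[$0]++'     # each distinct command once, in order seen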

Below are excerpts from that link, which I might use to make the above scripts better:

Maybe I can use different deduplication scripts:
Michael Hoffman's solution above is short and sweet. For larger files, a Schwartzian-transform approach – adding an index field with awk, followed by multiple rounds of sort and uniq – involves less memory overhead. The following snippet works in bash:
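(the snippet itself was lost from this copy; this is my rendering of the approach described – number each line, sort by content, drop duplicates while ignoring the index, then restore the original order and strip the index; "infile" is a placeholder name)

awk '{print NR "\t" $0}' infile \
  | sort -t$'\t' -k2,2 -k1,1n \
  | uniq -f1 \
  | sort -t$'\t' -k1,1n \
  | cut -f2-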

Thanks 1_CR! I needed a "uniq -u" (remove duplicates entirely) rather than uniq (leave 1 copy of the duplicates). The awk and perl solutions can't really be modified to do this, but yours can! I may have also needed the lower memory use, since I will be uniq'ing something like 100,000,000 lines 8-). Just in case anyone else needs it, I just put a "-u" in the uniq portion of the command:
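(my rendering of that change – the same pipeline with -u added, which drops every line that ever repeats instead of keeping one copy of it)

awk '{print NR "\t" $0}' infile \
  | sort -t$'\t' -k2,2 -k1,1n \
  | uniq -f1 -u \
  | sort -t$'\t' -k1,1n \
  | cut -f2-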

 
