Most powerful examples of Unix commands or scripts

Posted 2019-03-07 10:23

There are many things that all programmers should know, but I am particularly interested in the Unix/Linux commands that we should all know, for accomplishing tasks that we may come up against at some point, such as refactoring, reporting, network updates, etc.

The reason I am curious is that, having previously worked as a software tester at a software company while studying for my degree, I noticed that all of the developers (who were developing Windows software) had two computers.

To their left was their Windows XP development machine, and to the right was a Linux box, Ubuntu I think. They told me they used it because it provided powerful Unix operations that Windows couldn't offer in their development process.

This makes me curious to know: as a software engineer, what do you believe are some of the most powerful scripts/commands/uses you can perform on a Unix/Linux operating system that every programmer should know, for solving real-world tasks that may not necessarily relate to writing code?

We all know what sed, awk and grep do. I am interested in some actual Unix/Linux scripting pieces that have solved a difficult problem for you, so that other programmers may benefit. Please provide your story and source.

I am sure there are numerous examples like this that people keep in their 'Scripts' folder.

Update: People seem to be misinterpreting the question. I am not asking for the names of individual Unix commands, but rather for Unix code snippets that have solved a problem for you.

Best answers from the Community


Traverse a directory tree and print the paths of any files whose contents match a regular expression:

find . -exec grep -l -e 'myregex' {} \; >> outfile.txt 
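
A slightly faster variant of the same idea (a sketch, assuming GNU find and grep; the -type f test and the "+" terminator are my additions, not part of the original answer) batches many files into each grep invocation instead of forking one grep per file:

find . -type f -exec grep -l -e 'myregex' {} + >> outfile.txt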

Invoke the default editor (Nano/Vim)

(Works on most Unix systems, including Mac OS X.) The default editor is whatever your EDITOR environment variable is set to, e.g. export EDITOR=/usr/bin/pico, which would go in ~/.profile under Mac OS X.

Ctrl+x Ctrl+e

List all running network connections (including which app they belong to)

lsof -i -nP
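
To narrow that down to a single port (a minimal sketch; port 8080 here is just an example I picked), lsof accepts an address filter after -i:

lsof -nP -i :8080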

Clear the shell's command history (another of my favourites)

history -c

25 answers
Deceive 欺骗
Reply #2 · 2019-03-07 11:08

You would be better off keeping a cheatsheet with you... there is no single command that can be termed most useful. If a particular command does your job, it's useful and powerful.

Edit: you want powerful shell scripts? Shell scripts are programs. Get the basics right, build on individual commands, and you'll get what is called a powerful script. The one that serves your need is powerful; otherwise it's useless. It would have been better had you mentioned a problem and asked how to solve it.

贼婆χ
Reply #3 · 2019-03-07 11:09

The power of these tools (grep, find, awk, sed) comes from their versatility, so giving a particular case seems rather pointless.

man is the most powerful command, because then you can understand what you type instead of just blindly copy-pasting from Stack Overflow.

Examples are welcome, but there are already topics for this. My most used:

grep something_to_find * -R

which can be replaced by ack and

find | xargs 

find with results piped into xargs can be very powerful
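
For instance (a sketch; the '*.log' filter and 'myregex' are placeholders of mine), using -print0 and -0 so filenames containing spaces survive the pipe:

find . -type f -name '*.log' -print0 | xargs -0 grep -l 'myregex'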

爷的心禁止访问
Reply #4 · 2019-03-07 11:09
for card in `seq 1 8`; do
  for ts in `seq 1 31`; do
    echo $card $ts >> /etc/tuni.cfg
  done
done

was better than writing the silly 248 lines of config by hand.
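
For what it's worth, the same loop can be written with bash brace expansion instead of seq (a minor variant, not from the original answer):

for card in {1..8}; do
  for ts in {1..31}; do
    echo "$card $ts" >> /etc/tuni.cfg
  done
done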

Needed to drop some leftover tables that were all prefixed with 'tmp':

for table in `echo show tables | mysql quotiadb | grep ^tmp`; do
  echo "drop table $table;"
done

Review the output, then rerun the loop and pipe it into mysql, e.g. as sketched below.
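
A sketch of that final step (quotiadb is the database name from the loop above; check the generated statements first, since this really does drop the tables):

for table in `echo show tables | mysql quotiadb | grep ^tmp`; do
  echo "drop table $table;"
done | mysql quotiadb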

何必那么认真
Reply #5 · 2019-03-07 11:09

To run several processes in parallel without overloading the machine too much (on a multiprocessor machine):

NP=`grep -c ^processor /proc/cpuinfo`   # number of CPU cores
# your loop starts here
    # if as many background jobs are running as there are cores, wait for them
    if [ `jobs -r | wc -l` -ge $NP ]; then
        wait
    fi
    launch_your_task_in_background &
# your loop ends here
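
A similar throttle can also be had with xargs -P (supported by GNU and BSD xargs); task.sh and the input list here are placeholders I made up, not part of the answer above:

# run at most $NP copies of ./task.sh at a time, one argument per invocation
printf '%s\n' input1 input2 input3 | xargs -n 1 -P "$NP" ./task.sh
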
何必那么认真
Reply #6 · 2019-03-07 11:11

Some of you might disagree with me, but nevertheless, here's something to talk about. If one learns gawk (other variants as well) thoroughly, one can skip learning and using grep/sed/wc/cut/paste and a few other *nix tools. All you need is one good tool to do the job of many combined.
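
For illustration, a few rough awk equivalences (simplified sketches rather than exact drop-in replacements):

awk '/pattern/' file                          # grep pattern file
awk '/pattern/ {n++} END {print n+0}' file    # grep -c pattern file
awk -F: '{print $1}' /etc/passwd              # cut -d: -f1 /etc/passwd
awk 'END {print NR}' file                     # wc -l file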

叛逆
Reply #7 · 2019-03-07 11:11

Some way to search (multiple) badly formatted log files, in which the search string may be found on an "orphaned" next line. For example, when searching for id = 110375, to display both the 1st line and the 3rd and 4th lines concatenated:

[2008-11-08 07:07:01] [INFO] ...; id = 110375; ...
[2008-11-08 07:07:02] [INFO] ...; id = 238998; ...
[2008-11-08 07:07:03] [ERROR] ... caught exception
...; id = 110375; ...
[2008-11-08 07:07:05] [INFO] ...; id = 800612; ...

I guess there must be better solutions (yes, add them!) than the following, which uses sed to concatenate the two lines before actually running grep:

#!/bin/bash

if [ $# -ne 1 ]
then
  echo "Usage: `basename $0` id"
  echo "Searches all myproject's logs for the given id"
  exit -1
fi  

# When finding "caught exception" then append the next line into the pattern
# space by using "N", and next replace the newline with a colon and a space
# to ensure a single line starting with a timestamp, to allow for sorting
# the output of multiple files:
ls -rt /var/www/rails/myproject/shared/log/production.* \
  | xargs cat | sed '/caught exception$/N;s/\n/: /g' \
  | grep "id = $1" | sort

...to yield:

[2008-11-08 07:07:01] [INFO] ...; id = 110375; ...
[2008-11-08 07:07:03] [ERROR] ... caught exception: ...; id = 110375; ...

Actually, a more generic solution would append all (possibly multiple) lines that do not start with some [timestamp] to the previous line. Anyone? Not necessarily using sed, of course.
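
One possible sketch of that generic approach with awk (untested against the real logs; it assumes every genuine log line starts with "[" and reuses the paths and $1 argument from the script above):

ls -rt /var/www/rails/myproject/shared/log/production.* \
  | xargs cat \
  | awk '/^\[/ { if (buf != "") print buf; buf = $0; next }
               { buf = buf ": " $0 }
         END   { if (buf != "") print buf }' \
  | grep "id = $1" | sort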
