Friday, December 28, 2012

[MS Exchange] public folder

First, you need to know the folder path!


>Add-PublicFolderClientPermission -Identity "\Marketing\West Coast" -AccessRights PublishingEditor -User Kim
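
If you don't know the path yet, the public folder hierarchy can be listed from the Exchange Management Shell first (a sketch; Get-PublicFolder with the -Recurse switch is assumed to be available in your Exchange version):

>Get-PublicFolder -Identity "\" -Recurse | Format-List Name,Identity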


Wednesday, November 28, 2012

[Linux] logrotate

#vi /etc/logrotate.conf
#vi /etc/logrotate.d/apache
Common directives:
compress - Compress the rotated log file with gzip.
nocompress - Do not compress rotated log files.
copytruncate - For processes that keep writing to an open log file; copies the active log file to a backup and then truncates the active file.
nocopytruncate - Copies the log file to a backup, but the open log file is not truncated.
create mode owner group - Rotates the log file and creates a new log file with the specified permissions, owner, and group. The default is to use the same mode, owner, and group as the original file.
nocreate - Prevents the creation of a new log file.
delaycompress - When used with compress, the rotated log file is not compressed until the next time it is cycled.
nodelaycompress - Overrides delaycompress; the log file is compressed as soon as it is cycled.
errors address - Mails logrotate errors to the given address.
ifempty - Rotates the log file even if it is empty (the logrotate default).
notifempty - Does not rotate the log file if it is empty.
mail address - Mails cycled log files to the given address; mailed log files are effectively removed from the system.
nomail - Does not mail a copy when log files are cycled.
olddir directory - Keeps cycled log files in the specified directory, which must be on the same filesystem as the current log files.
noolddir - Keeps cycled log files in the same directory as the current log files.
prerotate/endscript - Enclose commands to be executed before a log file is rotated; the prerotate and endscript keywords must each appear on a line by themselves.
postrotate/endscript - Enclose commands to be executed after a log file has been rotated; the postrotate and endscript keywords must each appear on a line by themselves.
daily - Rotate log files daily.
weekly - Rotate log files weekly.
monthly - Rotate log files monthly.
rotate count - Specifies the number of rotations to keep before a file is deleted. A count of 0 keeps no old copies; a count of 5 keeps five copies.
tabootext [+] list - Directs logrotate not to rotate files with the listed extensions. The default list of extensions is .rpm-orig, .rpmsave, v, and ~.
size size - Rotates the log file when the specified size is reached. Size may be specified in bytes (default), kilobytes (sizek), or megabytes (sizem).
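
For reference, a minimal /etc/logrotate.d/apache entry using some of these directives might look like the following (a sketch; the log path, retention, and reload command are assumptions to adapt to your setup):

/var/log/apache2/*.log {
        weekly
        rotate 4
        compress
        delaycompress
        notifempty
        create 640 root adm
        postrotate
                /etc/init.d/apache2 reload > /dev/null 2>&1 || true
        endscript
}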

Monday, October 29, 2012

[Linux] how to use 'find'

Locating Files:

The find command is used to locate files on a Unix or Linux system.  find will search any set of directories you specify for files that match the supplied search criteria.  You can search for files by name, owner, group, type, permissions, date, and other criteria.  The search is recursive in that it will search all subdirectories too.  The syntax looks like this:
find where-to-look criteria what-to-do
All arguments to find are optional, and there are defaults for all parts.  (This may depend on which version of find is used.  Here we discuss the freely available Gnu version of find, which is the version available on YborStudent.)  For example, where-to-look defaults to . (that is, the current working directory), criteria defaults to none (that is, select all files), and what-to-do (known as the find action) defaults to ‑print (that is, display the names of found files to standard output).  Technically, the criteria and actions are all known as find primaries.
For example:
find
will display the pathnames of all files in the current directory and all subdirectories.  The commands
find . -print
find -print
find .
do the exact same thing.  Here's an example find command using a search criterion and the default action:
find / -name foo
This will search the whole system for any files named foo and display their pathnames.  Here we are using the criterion -name with the argument foo to tell find to perform a name search for the filename foo.  The output might look like this:
/home/wpollock/foo
/home/ua02/foo
/tmp/foo
If find doesn't locate any matching files, it produces no output.
The above example said to search the whole system, by specifying the root directory (“/”) to search.  If you don't run this command as root, find will display an error message for each directory on which you don't have read permission.  This can be a lot of messages, and the matching files that are found may scroll right off your screen.  A good way to deal with this problem is to redirect the error messages so you don't have to see them at all:
find / -name foo 2>/dev/null
You can specify as many places to search as you wish:
find /tmp /var/tmp . $HOME -name foo

Advanced Features and Applications:

The “‑print” action lists the names of files separated by a newline.  But it is common to pipe the output of find into xargs, which uses a space to separate file names.  This can lead to a problem if any found files contain spaces in their names, as the output doesn't use any quoting.  In such cases, when the output of find contains a file name such as “foo bar” and is piped into another command, that command “sees” two file names, not one file name containing a space.  Even without using xargs you could have a problem if the file name contains a newline character.
In such cases you can specify the action “‑print0” instead.  This lists the found files separated not with a newline but with a null (or “NUL”) character, which is not a legal character in Unix or Linux file names.  Of course the command that reads the output of find must be able to handle such a list of file names.  Many commands commonly used with find (such as tar or cpio) have special options to read in file names separated with NULs instead of spaces.
Instead of having find list the files, it can run some command for each file found, using the “‑exec” action.  The ‑exec is followed by some shell command line, ended with a semicolon (“;”).  (The semicolon must be quoted from the shell, so find can see it!)  Within that command line, the word “{}” will expand out to the name of the found file.  See below for some examples.
You can use shell-style wildcards in the -name search argument:
find . -name foo\*bar
This will search from the current directory down for foo*bar (that is, any filename that begins with foo and ends with bar).  Note that wildcards in the name argument must be quoted so the shell doesn't expand them before passing them to find.  Also, unlike regular shell wildcards, these will match leading periods in filenames.  (For example “find -name \*.txt”.)
You can search for other criteria beside the name.  Also you can list multiple search criteria.  When you have multiple criteria any found files must match all listed criteria.  That is, there is an implied Boolean AND operator between the listed search criteria.  find also allows OR and NOT Boolean operators, as well as grouping, to combine search criteria in powerful ways (not shown here.)
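As a hedged illustration of those operators (the extensions and user name here are just placeholders), the following matches files ending in .tmp or .bak that are not owned by root:
find . \( -name '*.tmp' -o -name '*.bak' \) ! -user root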
Here's an example using two search criteria:
find / -type f -mtime -7 | xargs tar -rf weekly_incremental.tar
gzip weekly_incremental.tar
will find any regular files (i.e., not directories or other special files) with the criteria “‑type f”, and only those modified seven or fewer days ago (“‑mtime ‑7”).  Note the use of xargs, a handy utility that converts a stream of input (in this case the output of find) into command line arguments for the supplied command (in this case tar, used to create a backup archive).
Using the tar option “‑c” is dangerous here;  xargs may invoke tar several times if there are many files found, and each “‑c” will cause tar to over-write the previous invocation.  The “‑r” option appends files to an archive.  Other options such as those that would permit filenames containing spaces would be useful in a “production quality” backup script.
Another use of xargs is illustrated below.  This command will efficiently remove all files named core from your system (provided you run the command as root of course):
find / -name core | xargs /bin/rm -f
find / -name core -exec /bin/rm -f '{}' \; # same thing
find / -name core -delete                  # same if using Gnu find
The last two forms run the rm command once per file, and are not as efficient as the first form, but they are safer if file names contain spaces or newlines.  The first form can be made safer if rewritten to use “‑print0” instead of (the default) “‑print”.  “‑exec” can be used more efficiently (see Using ‑exec Efficiently below), but doing so means running the command once with many file names passed as arguments, and so has the same safety issues as with xargs.
One of my favorite of the find criteria is used to locate files modified less than 10 minutes ago.  I use this right after using some system administration tool, to learn which files got changed by that tool:
find / -mmin -10
(This search is also useful when I've downloaded some file but can't locate it, only in that case “‑cmin” may work better.  Keep in mind neither of these criteria is standard; “‑mtime” and “‑ctime” are standard, but use days and not minutes.)
Another common use is to locate all files owned by a given user (“-user username”).  This is useful when deleting user accounts.
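For example (jdoe is a hypothetical account name):
find / -user jdoe 2>/dev/null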
You can also find files with various permissions set.  “-perm /permissions” means to find files with any of the specified permissions on, “-perm -permissions” means to find files with all of the specified permissions on, and “-perm permissions” means to find files with exactly permissions.  Permissions can be specified either symbolically (preferred) or with an octal number.  The following will locate files that are writeable by “others” (including symlinks, which should be writeable by all):
find . -perm -o=w
(Using -perm is more complex than this example shows.  You should check both the POSIX documentation for find (which explains how the symbolic modes work) and the Gnu find man page (which describes the Gnu extensions).)
When using find to locate files for backups, it often pays to use the “-depth” option (really a criterion that is always true), which forces the output to be depth-first—that is, files first and then the directories containing them.  This helps when the directories have restrictive permissions, and restoring the directory first could prevent the files from restoring at all (and would change the time stamp on the directory in any case).  Normally, find returns the directory first, before any of the files in that directory.  This is useful when using the “‑prune” action to prevent find from examining any files you want to ignore:
find / -name /dev -prune ...other criteria | xargs tar ...
Using just “find / -name /dev ‑prune | xargs tar ...” won't work as most people might expect.  This says to only find files named “/dev”, and then (if a directory) don't descend into it.  So you only get the single directory name “/dev”!  A better plan is to use the following:
find / ! -path /dev\* |xargs ...
which says find everything except pathnames that start with “/dev”.  The “!” means Boolean NOT.
When specifying time with find options such as ‑mmin (minutes) or ‑mtime (24 hour periods, starting from now), you can specify a number “n” to mean exactly n, “-n” to mean less than n, and “+n” to mean more than n.
Fractional 24-hour periods are truncated!  That means that “find ‑mtime +1” says to match files modified two or more days ago.
For example:
find . -mtime 0   # find files modified between now and 1 day ago
                  # (i.e., within the past 24 hours)
find . -mtime -1  # find files modified less than 1 day ago
                  # (i.e., within the past 24 hours, as before)
find . -mtime 1   # find files modified between 24 and 48 hours ago
find . -mtime +1  # find files modified more than 48 hours ago

find . -mmin +5 -mmin -10 # find files modified between
                          # 6 and 9 minutes ago
Using the (non-standard) “‑printf” action instead of the default “‑print” is useful to control the output format better than you can with the ls or dir utilities.  You can use find with the ‑printf action to produce output that can easily be parsed by other utilities or imported into spreadsheets or databases.  See the Gnu find man page for the dozens of possibilities with the ‑printf action.  (In fact, find with ‑printf is more versatile than ls; it is the preferred tool for forensic examiners even on Windows systems, to list file information.)  For example the following displays non-hidden (no leading dot) files in the current directory only (no subdirectories), with a custom output format:
find . -maxdepth 1 -name '[!.]*' -printf 'Name: %16f Size: %6s\n'
“‑maxdepth” is a Gnu extension.  On a modern, POSIX version of find you could use this:
find . -path './*' -prune ...
On any version of find you can use this more complex (but portable) code:
find . ! -name . -prune ...
which says to “prune” (don't descend into) any directories except “.”.
Note that “‑maxdepth 1” will include “.” unless you also specify “‑mindepth 1”.  A portable way to include “.” is:
 find . \( -name . -o -prune \) ...
The “\(” and “\)” are just parenthesis used for grouping, and escaped from the shell.  The “-o” means Boolean OR.
[This information posted by Stephane Chazelas, on 3/10/09 in newsgroup comp.unix.shell.]
As a system administrator, you can use find to locate suspicious files (e.g., world writable files, files with no valid owner and/or group, SetUID files, files with unusual permissions, sizes, names, or dates).  Here's a final more complex example (which I saved as a shell script):
find / -noleaf -wholename '/proc' -prune \
     -o -wholename '/sys' -prune \
     -o -wholename '/dev' -prune \
     -o -wholename '/windows-C-Drive' -prune \
     -o -perm -2 ! -type l  ! -type s \
     ! \( -type d -perm -1000 \) -print
This says to search the whole system, skipping the directories /proc, /sys, /dev, and /windows-C-Drive (presumably a Windows partition on a dual-booted computer).  The Gnu -noleaf option tells find not to assume all remaining mounted filesystems are Unix file systems (you might have a mounted CD for instance).  The “-o” is the Boolean OR operator, and “!” is the Boolean NOT operator (applies to the following criteria).
So these criteria say to locate files that are world writable (“-perm -2”, same as “-o=w”) and NOT symlinks (“! ‑type l”) and NOT sockets (“! ‑type s”) and NOT directories with the sticky (or text) bit set (“! \( ‑type d -perm -1000 \)”).  (Symlinks, sockets and directories with the sticky bit set are often world-writable and generally not suspicious.)
A common request is a way to find all the hard links to some file.  Using “ls -li file” will tell you how many hard links the file has, and the inode number.  You can locate all pathnames to this file with:
  find mount-point -xdev -inum inode-number
Since hard links are restricted to a single filesystem, you need to search that whole filesystem so you start the search at the filesystem's mount point.  (This is likely to be either “/home” or “/” for files in your home directory.)  The “-xdev” option tells find to not search any other filesystems.
(While most Unix and all Linux systems have a find command that supports the “-inum” criterion, this isn't POSIX standard.  Older Unix systems provided the “ncheck” utility instead that could be used for this.)

Using ‑exec Efficiently:

The ‑exec action takes a command (along with its options) as an argument.  The arguments should contain {} (usually quoted), which is replaced in the command with the name of the currently found file.  The command is terminated by a semicolon, which must be quoted (“escaped”) so the shell will pass it literally to the find command.
To use a more complex action with ‑exec, you can use “sh ‑c complex-command” as the Unix command.  Here's a somewhat contrived example, that for each found file replaces “Mr.” with “Mr. or Ms.”, and also converts the file to uppercase:
   find whatever... -exec sh -c 'sed "s/Mr\./Mr. or Ms./g" "{}" \
     | tr "[:lower:]" "[:upper:]" >"{}.new"' \;
The ‑exec action in find is very useful, but since it runs the command listed for every found file it isn't very efficient.  On a large system this makes a difference!  One solution is to combine find with xargs as discussed above:
  find whatever... | xargs command
However this approach has two limitations.  Firstly not all commands accept the list of files at the end of the command.  A good example is cp:
find . -name \*.txt | xargs cp /tmp  # This won't work!
(Note the Gnu version of cp has a non-POSIX option “‑t” for this, and xargs has options to handle this too.)
Secondly, filenames may contain spaces or newlines, which would confuse the command used with xargs.  (Again Gnu tools have options for that, “find ... ‑print0 | xargs -0 ...”.)
There are POSIX (but non-obvious) solutions to both problems.  An alternate form of ‑exec ends with a plus-sign, not a semi-colon.  This form collects the filenames into groups or sets, and runs the command once per set.  (This is exactly what xargs does, to prevent argument lists from becoming too long for the system to handle.)  In this form the {} argument expands to the set of filenames.  For example:
find / -name core -exec /bin/rm -f '{}' +
This command is equivalent to using find with xargs, only a bit shorter and more efficient.  But this form of ‑exec can be combined with a shell feature to solve the other problem (names with spaces).  The POSIX shell allows us to use:
sh -c 'command-line' [ command-name [ args... ] ]
(We don't usually care about the command-name, so “X”, “dummy”, or “'inline cmd'” is often used.)  Here's an example of efficiently copying found files to /tmp, in a POSIX-compliant way (Posted on comp.unix.shell netnews newsgroup on Oct. 28 2007 by Stephane CHAZELAS):
find . -name '*.txt' -type f \
  -exec sh -c 'exec cp -f "$@" /tmp' X '{}' +
(Obvious, simple, and readable, isn't it?  Perhaps not, but worth knowing since it is safe, portable, and efficient.)

Common “Gotcha”:

If the given expression to find does not contain any of the “action” primaries ‑exec, -ok, or ‑print, the given expression is effectively replaced by:
find \( expression \) -print
The implied parenthesis can cause unexpected results.  For example, consider these two similar commands:
$ find -name tmp -prune -o -name \*.txt
./bin/data/secret.txt
./tmp
./missingEOL.txt
./public_html/graphics/README.txt
./datafile2.txt
./datafile.txt
$ find -name tmp -prune -o -name \*.txt -print
./bin/data/secret.txt
./missingEOL.txt
./public_html/graphics/README.txt
./datafile2.txt
./datafile.txt
The lack of an action in the first command means it is equivalent to:
find . \( -name tmp -prune -o -name \*.txt \) -print
This causes tmp to be included in the output.  However for the second find command the normal rules of Boolean operator precedence apply, so the pruned directory does not appear in the output.
The find command can be amazingly useful.  See the man page to learn all the criteria and actions you can use.

Friday, October 26, 2012

[Linux] undefined mssql_connect

#apt-get install php5-sybase
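
Installing the package alone may not be enough; on a typical Debian setup with mod_php the web server needs a restart so PHP picks up the new extension (a sketch):

#/etc/init.d/apache2 restart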

[apache]fail to start

! check the log file!

I had to create a directory to hold the log file.
/var/log/apache2 is the default log location, but the old web server's configuration had been using /var/log/apache for its logs.
Creating the /var/log/apache directory on the new server fixed the problem.
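
As commands, the check and the fix amounted to roughly this (a sketch for a Debian-style Apache layout):

#apache2ctl configtest
#grep -ri errorlog /etc/apache2/
#mkdir -p /var/log/apache
#/etc/init.d/apache2 start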

Wednesday, October 24, 2012

[Debian] accounts migration

Commands to type on old Linux system

First, create the migration files and a tarball of the old users' data on the old Linux system. Create a working directory:
# mkdir /root/move/
Setup UID filter limit:
# export UGIDLIMIT=500
Now copy /etc/passwd accounts to /root/move/passwd.mig, using awk to filter out system accounts (i.e. only copy user accounts):
# awk -v LIMIT=$UGIDLIMIT -F: '($3>=LIMIT) && ($3!=65534)' /etc/passwd > /root/move/passwd.mig
Copy /etc/group file:
# awk -v LIMIT=$UGIDLIMIT -F: '($3>=LIMIT) && ($3!=65534)' /etc/group > /root/move/group.mig
Copy /etc/shadow file:
# awk -v LIMIT=$UGIDLIMIT -F: '($3>=LIMIT) && ($3!=65534) {print $1}' /etc/passwd | tee - |egrep -f - /etc/shadow > /root/move/shadow.mig
Copy /etc/gshadow (rarely used):
# cp /etc/gshadow /root/move/gshadow.mig
Make a backup of /home and /var/spool/mail dirs:
# tar -zcvpf /root/move/home.tar.gz /home
# tar -zcvpf /root/move/mail.tar.gz /var/spool/mail

Where,
  • Users added to a Linux system always start with UID and GID values as specified by the distribution or set by the admin. Limits for different Linux distros:
    • RHEL/CentOS/Fedora Core : Default is 500 and upper limit is 65534 (/etc/libuser.conf).
    • Debian and Ubuntu Linux : Default is 1000 and upper limit is 29999 (/etc/adduser.conf).
  • You should not create any new user accounts on the newly installed system before the migration. The awk command above filters out UIDs according to your Linux distro's limits.
  • export UGIDLIMIT=500 - sets the UID start limit for normal user accounts. Set this value as appropriate for your Linux distro.
  • awk -v LIMIT=$UGIDLIMIT -F: '($3>=LIMIT) && ($3!=65534)' /etc/passwd > /root/move/passwd.mig - The UGIDLIMIT variable is passed to awk with the -v option (it assigns the value of the shell variable UGIDLIMIT to the awk variable LIMIT). Option -F: sets the field separator to ":". Finally, awk reads each line from /etc/passwd, filters out system accounts, and generates the new file /root/move/passwd.mig. The same logic applies to the rest of the awk commands.
  • tar -zcvpf /root/move/home.tar.gz /home - Make a backup of users /home dir
  • tar -zcvpf /root/move/mail.tar.gz /var/spool/mail - Make a backup of users mail dir
Use scp, a USB drive, or tape to copy /root/move to the new Linux system.
# scp -r /root/move/* user@new.linuxserver.com:/path/to/location

Commands to type on new Linux system

First, make a backup of current users and passwords:
# mkdir /root/newsusers.bak
# cp /etc/passwd /etc/shadow /etc/group /etc/gshadow /root/newsusers.bak

Now restore passwd and other files in /etc/
# cd /path/to/location
# cat passwd.mig >> /etc/passwd
# cat group.mig >> /etc/group
# cat shadow.mig >> /etc/shadow
# /bin/cp gshadow.mig /etc/gshadow

Please note that you must use >> (append) and not > (create) shell redirection.
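After appending, it is worth sanity-checking the merged files; pwck and grpck report inconsistencies (the -r flag keeps them read-only):

# pwck -r
# grpck -r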
Now copy and extract home.tar.gz to new server /home
# cd /
# tar -zxvf /path/to/location/home.tar.gz

Now copy and extract mail.tar.gz (Mails) to new server /var/spool/mail
# cd /
# tar -zxvf /path/to/location/mail.tar.gz

Now reboot system; when the Linux comes back, your user accounts will work as they did before on old system:
# reboot
Please note that if you are new to Linux, you should try the above commands in a sandbox environment first. The same technique can be used for UNIX-to-UNIX or UNIX-to-Linux account migration; you may need to make a couple of changes, but the overall concept remains the same.

[Debian] install integration Services for hyper-v

Download the Debian Squeeze backports kernel 3.2.23 with LIC 3.4:
wget -O linux-image-3.2.23-hyperv_3.4_amd64.deb http://docs.homelinux.org/lib/exe/fetch.php?media=linux-image-3.2.23-hyperv_3.4_amd64.deb
wget -O linux-headers-3.2.23-hyperv_3.4_amd64.deb http://docs.homelinux.org/lib/exe/fetch.php?media=linux-headers-3.2.23-hyperv_3.4_amd64.deb
dpkg -i linux-image-3.2.23-hyperv_3.4_amd64.deb linux-headers-3.2.23-hyperv_3.4_amd64.deb
 
 
Reference: http://forum.osxlatitude.com/index.php?/topic/1716-install-hyper-v-integration-services-on-debian-5x/

Thursday, July 19, 2012

[MS Windows] net use for map network drive

#net use X: \\[server name]\[shared name]

to remove the map drive,

#net use X: /delete
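
The mapping can also be made to reconnect at each logon by adding the /persistent switch (an optional variation):

#net use X: \\[server name]\[shared name] /persistent:yes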



How do I share a folder?

To create a new local file share, use the following NET SHARE command:
NET SHARE sharename=drive:path /REMARK:"My shared folder" [/CACHE:Manual | Automatic | No ]
This is what it would look like in the real world:
NET SHARE MySharedFolder=c:\Documents /REMARK:"Docs on server ABC"

How do I limit how many users can access my shared folder?

To limit the number of users who can connect to a shared folder, you would use the following NET SHARE command:
NET SHARE sharename /USERS:number /REMARK:"Shared folder with limited number of users"
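For example, limiting the MySharedFolder share from the earlier example to five concurrent users could look like this:
NET SHARE MySharedFolder /USERS:5 /REMARK:"Shared folder with limited number of users"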
To remove any limit on the number of users who can connect to a shared folder, use the following:
NET SHARE sharename /UNLIMITED /REMARK:"Folder with unlimited access"
This allows an unlimited number of users to connect to the shared resource.

How do I remove sharing from a folder?

You can accomplish this using the following NET SHARE command again. If you want to delete a share, then execute the following:
NET SHARE {sharename | devicename | drive:path} /DELETE
To delete all shares that apply to a given device, you would use the following:
NET SHARE devicename /DELETE
In this case the devicename can be a printer (Lpt1) or a pathname (for example C:\MySharedFolder\).



Network Drive Mappings and NET USE

The following information pertains to Windows and the SmartBatch 2009 Executive Server running as a Service.
A Service provides a batch logon as opposed to an interactive logon.  An interactive logon provides additional capability such as mapped drives.  If you require the use of network drive letters (such as F:), they must be mapped in an Operation before they are used.  To do this, use the NET USE capability.  This can be placed in a .bat file and executed via an Operation, or can be placed directly in an Operation.
The following shows the general syntax for the NET USE command:
net use [devicename | *] [\\computername\sharename[\volume] [password | *]] [/user:[domainname\]username] [[/delete] | [/persistent:{yes | no}]]
net use devicename [/home[password | *]] [/delete:{yes | no}]
net use [/persistent:{yes | no}]
You can type net use without parameters from a command prompt to obtain a list of network connections.

Examples

Using a .bat file

This example shows how to map three network drives using a .bat file named MyNetUse.bat:
net use o: \\LA\cdrive password /USER:myAccount
net use p: \\NY\cdrive password /USER:myAccount
net use q: \\SF\edrive password /USER:myAccount
The MyNetUse.bat file can be added as an Operation within SmartBatch 2009.

To configure the NET USE capability directly into an Operation

cmd.exe /c net use o: \\LA\cdrive password /USER:myAccount

To delete a drive mapping

net use o: /delete


Monday, June 25, 2012

[Linux] mysql data migration

To find out where the MySQL data is located:
#vi /etc/my.cnf
datadir=/var/lib/mysql/

In order to back up the data:
#tar -zcvf mysql.bak.tar.gz /var/lib/mysql

Then go to the new server and, in the directory where you want the data, extract it:
#tar -zxvf mysql.bak.tar.gz

Doing this, all the MySQL accounts are transferred as well.
Check that you can connect to MySQL with the credentials you used on the old server.
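
If the restored files end up with the wrong ownership, MySQL usually refuses to start; resetting ownership and restarting the service is a reasonable follow-up (a sketch assuming the default mysql user/group and Debian-style init scripts):

#/etc/init.d/mysql stop
#chown -R mysql:mysql /var/lib/mysql
#/etc/init.d/mysql start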


Sunday, June 17, 2012

[Linux]NFS

#apt-get install nfs-kernel-server nfs-common rpcbind

[Server side]
in /etc/exports
# /etc/exports: the access control list for filesystems which may be exported
#               to NFS clients.  See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes       hostname1(rw,sync) hostname2(ro,sync)
#
# Example for NFSv4:
# /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt)
# /srv/nfs4/homes  gss/krb5i(rw,sync)
#
/home           192.168.0.101(rw,sync,no_root_squash)
/var/nfs        192.168.0.101(rw,sync)
 
(The no_root_squash option means that /home can be accessed as root, i.e. root on the client is not mapped to an anonymous user.)
Whenever we modify /etc/exports, we must run

#exportfs -a

afterwards to make the changes effective.

[Client side]
mkdir -p /mnt/nfs/home
mkdir -p /mnt/nfs/var/nfs
Afterwards, we can mount them as follows:
mount 192.168.0.100:/home /mnt/nfs/home
mount 192.168.0.100:/var/nfs /mnt/nfs/var/nfs
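
To have these mounts survive a reboot, matching entries can be added to the client's /etc/fstab (a sketch reusing the same server address and mount points; the mount options are assumptions to tune for your environment):

192.168.0.100:/home     /mnt/nfs/home      nfs  rw,hard,intr  0  0
192.168.0.100:/var/nfs  /mnt/nfs/var/nfs   nfs  rw,hard,intr  0  0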

 

[MS Windows]Where putty stores its data

HKEY_CURRENT_USER\Software\SimonTatham\PuTTY

Tuesday, May 15, 2012

[MS WinXP] remote desktop registry

HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server

fDenyTSConnections
Change this value to 0 to enable Remote Desktop connections.
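
The same change can be made from an elevated command prompt with reg.exe (shown here as a convenience):

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v fDenyTSConnections /t REG_DWORD /d 0 /f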

Tuesday, May 8, 2012

[MS Windows Vista] Search Index in Outlook problem

Rebuilding the Search Index in Windows Vista

If you are encountering problems with the search engine built into Windows Vista, your best bet is to tell the indexing service to completely rebuild the index. It takes a while to rebuild, but it's usually worth it.
It’s important to note that the search indexing in Windows Vista also handles searching in Microsoft Outlook 2007, so if you are encountering errors there you have another troubleshooting step other than disabling instant search.
Type indexing into the Start menu search box to launch Indexing Options.
Once the dialog opens, you’ll want to choose the “Advanced” button, which will give you a UAC prompt.

Now you can simply click the Rebuild button, and the search index will be wiped clean and regenerated.

This process does take a long time if you have many files being indexed, so you might want to consider trimming down the indexed locations.

[MS Windows XP] Open a command prompt in the folder you want

Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\*\shell]

[HKEY_CLASSES_ROOT\*\shell\executecmd]
@="Open Command Line on this Location"

[HKEY_CLASSES_ROOT\*\shell\executecmd\command]
@="cmd.exe"

[HKEY_CLASSES_ROOT\Directory\shell\executecmd]
@="Open Command Line on this Location"

[HKEY_CLASSES_ROOT\Directory\shell\executecmd\command]
@="cmd.exe"

[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\*\shell\executecmd]
@="Open Command Line on this Location"

[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\*\shell\executecmd\command]
@="cmd.exe"

[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Directory\shell\executecmd]
@="Open Command Line on this Location"

[HKEY_LOCAL_MACHINE\SOFTWARE\Classes\Directory\shell\executecmd\command]
@="cmd.exe"