BMTD 's Yard of Fun

Technology, Sports, Music, Chinese Essays

Browsing Posts published by mmpower

No changes to the application, purely via simple server configuration... transparent to your apps

1. Using Apache mod_rewrite to select a mirror automatically based on the user's IP


Add a virtual host to the Apache config:

    <VirtualHost *:80>
        DocumentRoot /hosting/balance
        ServerName balance

        <FilesMatch "\.(php|htm|html|pl|asp)$">
            Allow from all
        </FilesMatch>

        RewriteEngine on
        RewriteLog rewrite.log
        RewriteLogLevel 9
        RewriteMap  lb  prg:/hosting/balance/ip_prg.php
        RewriteRule ^(.*)$ ${lb:$1} [P]

        CustomLog "logs/balance_access.log" combined
        ErrorLog "logs/balance_error.log"
    </VirtualHost>

Then a simple PHP script, ip_prg.php (referenced in the Apache config above), does the redirection:
continue reading…
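The ip_prg.php script itself is elided above, but any RewriteMap prg: program has the same contract: read one lookup key per line from stdin and write exactly one rewritten target per line, unbuffered. A hypothetical Python stand-in (the mirror hosts and the hashing choice are assumptions, not the original script):

```python
#!/usr/bin/env python3
# Hypothetical stand-in for the elided ip_prg.php RewriteMap "prg:"
# program. Apache feeds one lookup key per line on stdin and expects
# exactly one output line per key, with output flushed immediately.
import sys
import zlib

# Hypothetical mirror hosts; substitute your real mirrors.
MIRRORS = ["http://mirror1.example.com", "http://mirror2.example.com"]

def pick_mirror(key):
    # Deterministically map the key (here the request path; a real
    # script would use the client IP) onto one of the mirrors.
    return MIRRORS[zlib.crc32(key.encode()) % len(MIRRORS)]

def main():
    for line in sys.stdin:
        key = line.strip()
        sys.stdout.write(pick_mirror(key) + key + "\n")
        sys.stdout.flush()  # prg: maps hang if output is buffered

if __name__ == "__main__":
    main()
```

The [P] flag on the RewriteRule then proxies the request to whichever mirror the map returns.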

Since this problem seems to popup on different lists, this message has
been cross-posted to the general Red Hat discussion list, the RHEL3
(Taroon) list and the RHEL4 (Nahant) list. My apologies for not having
the time to post this summary sooner.

I would still be banging my head against this problem were it not for
the generous assistance of Tom Sightler <ttsig@xxxxxxxxxxxxx> and Brian
Long <brilong@xxxxxxxxx>.

In general, the out of memory killer (oom-killer) begins killing
processes, even on servers with large amounts (6Gb+) of RAM. In many
cases people report plenty of “free” RAM and are perplexed as to why the
oom-killer is whacking processes. Indications that this has happened
appear in /var/log/messages:
Out of Memory: Killed process [PID] [process name].

In my case I was upgrading various VMware servers from RHEL3 / VMware
GSX to RHEL4 / VMware Server. One of the virtual machines on a server
with 16Gb of RAM kept getting whacked by the oom-killer. Needless to
say, this was quite frustrating.

As it turns out, the problem was low memory exhaustion. Quoting Tom:
“The kernel uses low memory to track allocations of all memory thus a
system with 16GB of memory will use significantly more low memory than a
system with 4GB, perhaps as much as 4 times. This extra pressure
happens from the moment you turn the system on before you do anything at
all because the kernel structures have to be sized for the potential of
tracking allocations in four times as much memory.”

You can check the status of low & high memory a couple of ways:

# egrep 'High|Low' /proc/meminfo
HighTotal:     5111780 kB
HighFree:         1172 kB
LowTotal:       795688 kB
LowFree:         16788 kB

# free -lm
             total       used       free     shared    buffers     cached
Mem:          5769       5751         17          0          8       5267
Low:           777        760         16          0          0          0
High:         4991       4990          1          0          0          0
-/+ buffers/cache:        475       5293
Swap:         4773          0       4773

When low memory is exhausted, it doesn’t matter how much high memory is
available; the oom-killer will begin whacking processes to keep the
server alive.

There are a couple of solutions to this problem:
continue reading…
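The fixes commonly discussed for this on 32-bit RHEL center on two things: booting the hugemem kernel, and telling the VM to defend the low zone more aggressively. A sketch (the sysctl value 250 is illustrative, not a tuned recommendation):

```shell
# Option 1 (assumption: 32-bit RHEL 3/4): install and boot the
# kernel-hugemem package, which uses a 4GB/4GB split and gives the
# kernel roughly 4GB of low memory instead of ~1GB.

# Option 2: make the kernel reclaim low memory more aggressively.
cat /proc/sys/vm/lower_zone_protection          # view current value
echo 250 > /proc/sys/vm/lower_zone_protection   # illustrative value
# To persist across reboots, add to /etc/sysctl.conf:
#   vm.lower_zone_protection = 250
```

Either approach relieves pressure on the low zone, which is what the oom-killer was reacting to.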

Looking at cheap storage for hosting backup/archive… driving force: cost.  NetApp is too expensive.

The following can be taken into consideration:

— Amazon S3 web services. Since it’s only used for backup/archive the cost should not be too high. Need to analyze the cost though. Another potential issue is legal/privacy…

     Technically the best way is probably to use an S3 file system driver, so that backup access is transparent and existing apps don't need to be modified. For example, there are several S3 file system drivers:

  •   Fuse over Amazon: http://

     Note that the Hadoop project provides two file systems that use S3. However, it seems you have to go through Hadoop to access them; they are not accessible to normal apps.
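One way to get the transparency described above is a FUSE mount; with the s3fs driver, for instance, a bucket mounts as a local directory (bucket name and paths below are hypothetical, and flags vary by s3fs version):

```shell
# Hypothetical s3fs mount: the bucket appears as an ordinary directory,
# so existing backup scripts need no changes.
s3fs backup-bucket /mnt/s3 -o passwd_file=/etc/passwd-s3fs
cp /var/backups/db-dump.tar.gz /mnt/s3/
umount /mnt/s3
```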

— GFS-like distributed file systems, so that we can use cheap commodity Intel hardware to build the storage cluster. Currently there are two open-source GFS-like DFS implementations:

  • CloudStore (formerly Kosmos File System / KFS). Quoted from the web site: “Web-scale applications require a scalable storage infrastructure to process vast amounts of data. CloudStore (formerly, Kosmos filesystem) is an open-source high performance distributed filesystem designed to meet such an infrastructure need.” It’s written in C++ and can be mounted as a file system via FUSE on Linux.
  • Hadoop HDFS: part of the Hadoop Core project. Hadoop is developed in Java. There is also some effort to mount HDFS on Linux systems.
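As noted, HDFS access normally goes through Hadoop's own tooling rather than ordinary file I/O; a hypothetical hadoop fs session (paths are examples):

```shell
# Hypothetical session: files live inside HDFS and are invisible to
# normal apps unless HDFS is separately mounted (e.g. via FUSE).
hadoop fs -mkdir /backup
hadoop fs -put db-dump.tar.gz /backup/
hadoop fs -ls /backup
```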

Got an error capturing a remote image using Microsoft Automated Deployment Services (ADS):

the capture job failed with the error message:
“Device or service connection does not exist”…
Here is the sequence of what I did:
(1) on the reference Win2003 system (the remote target), installed the ADS admin agent and started it
(2) on the remote target, created the directories C:\sysprep and C:\sysprep\i386; copied sysprep.exe and the other Sysprep binaries to the i386 directory; copied a sysprep.inf file to the C:\sysprep directory
(3) on the ADS controller, using the ADS management tool, added a device for the remote target by manually entering its MAC address and IP
(4) took control of the device, then ran a job with an “image-capture-win2k” template created from the sample job sequence:

<!-- start sequence -->
<sequence version="1" description="capture image" command="capture-image-w2k.xml" xmlns="">
  <!-- STEP 1: sysprep step -->
  <task description="sysprep target" doesReboot="true"/>
  <!-- STEP 2: boot to deployment agent -->
  <task description="Boot to deployment agent" doesReboot="false"/>
  <!-- STEP 3: capture image -->
  <task description="Capture image" doesReboot="false">
    <parameter>"image description"</parameter>
  </task>
</sequence>

and the error occurred at the first step: the sysprep step.

Followed the MS instructions, searched Google, tried various approaches, but still no luck… Headache :(


Author: An Puruo, posted in the Haigui Teahouse (海归茶馆) forum, from Haiguinet (海归网)



The two parties to a contract should be in a contractual relationship: if both sides strictly honor the contract, and the contract is drafted comprehensively, completely, accurately, and properly, then the relationship is simple; just do what the contract says. In China, however, the first problem is that commercial contracts are usually written very simply and sloppily, because Chinese businesspeople are not yet in the habit of having lawyers draft contracts. Many contracts are cobbled together by junior staff who have neither legal knowledge nor business experience, and the boss also feels the contract is not that important, that a one-page contract will do, thereby saving the lawyer's fee. In the contracts I have seen, imprecise language, loopholes, and even self-contradictions are everywhere, sometimes to a comical degree.

Another reason Chinese people like simple contracts is that they are not used to the “villain first, gentleman later” way of doing business, that is, hard bargaining up front, and feel that spelling certain things out would cost the other side face. This is indeed the case: some people take business matters personally. In a word, still not professional enough!

Simple contracts do have an advantage: they make the process before and at signing more efficient, since the two sides need not debate every clause word by word. But after the contract is signed, all is well only as long as execution goes smoothly; once a problem arises (and one certainly will), the trouble is big. Hence the American saying: in China, signing the contract is only the start of the negotiation, not the end.

Simple Chinese contracts relegate too much to “matters not covered herein, to be negotiated separately,” which leaves both parties far too much room to improvise during execution. When an issue the contract does not address comes up, each side may simply act as it pleases: “one China, each with its own interpretation,” each exercising its own political wisdom. If the other party happens to be a Ma Ying-jeou, fine; if it is a Chen Shui-bian, a falling-out is certain. This is the first risk the two parties face.

continue reading…