Feb 07, 2012
I hit this problem in my Hadoop-based project:
Cannot run program "chmod": java.io.IOException: error=12, Cannot allocate memory
Some research turned up nothing directly useful – everyone pointed at the JDK's use of fork()+exec() to spawn a child process for running shell commands, which causes a large chunk of memory to be (momentarily) committed when the new process is forked. Still, it was weird that I hit this problem only on my AWS micro instance and not on my MacBook, so I kept digging –
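For background (my own gloss on that explanation, not something from the original thread): fork() must be able to commit a virtual address space as large as the parent JVM's, even though exec() replaces it almost immediately. Whether the kernel allows that depends on its memory overcommit policy, which you can inspect; relaxing the policy is another workaround some people use, shown commented out here since it trades the fork() failure for OOM-killer risk later:

```shell
# Inspect the kernel's memory overcommit policy:
#   0 = heuristic overcommit (the default), 1 = always allow, 2 = strict accounting
cat /proc/sys/vm/overcommit_memory
# With strict accounting (or the heuristic plus zero swap), fork() from a JVM
# that already uses most of RAM can fail with ENOMEM. As root, relaxing the
# policy is an alternative to adding swap:
# sysctl -w vm.overcommit_memory=1
```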
It turned out swap was the problem: my micro instance in AWS had no swap enabled (i.e. zero swap space). After adding 1 GB of swap, everything is fine now.
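A quick way to confirm that diagnosis on a given box – this reads the standard /proc/meminfo fields, nothing AWS-specific assumed:

```shell
#!/bin/sh
# Print total RAM and swap from /proc/meminfo; warn when swap is zero.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
swap_kb=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
echo "RAM:  ${mem_kb} kB"
echo "Swap: ${swap_kb} kB"
if [ "${swap_kb:-0}" -eq 0 ]; then
    echo "WARNING: no swap - spawning children from a big JVM may hit ENOMEM"
fi
```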
I'm a Java newbie, so my question is: though the error is gone, did I actually fix it the right way?
BTW, writing down how to turn on file-based swap space so I don't have to search for it again …
# all with root
dd if=/dev/zero of=/root/swapfile bs=1M count=1024
chmod 600 /root/swapfile
mkswap /root/swapfile
swapon /root/swapfile
swapon -s # to make sure it IS on
echo "/root/swapfile swap swap defaults 0 0" >> /etc/fstab # to keep it enabled after reboot
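For reference, the six fields in that fstab line are: device (here a plain file), mount point (just "swap" for swap areas), filesystem type, mount options, dump flag, and fsck pass order (both 0 for swap). A one-liner to sanity-check that the entry actually landed:

```shell
# List any non-comment swap entries in /etc/fstab (falls back to a message if none).
grep -E '^[^#].*[[:space:]]swap[[:space:]]' /etc/fstab 2>/dev/null || echo "no swap entry in /etc/fstab"
```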