Where I just started work we run large processes for simulation and testing of semiconductors. Currently we use Solaris because of past limitations on the amount of RAM that a single process can address under Linux. Recently we ran some tests on a Dell dual Xeon 1.7GHz box with 4GB of RAM running Redhat 7.1 (with a stock kernel). Speed-wise it kicked the crap out of our Sunblade, but we had problems with processes dying right around 2.3GB (according to top).

So I started to investigate, and quickly discovered that there is no good source for this sort of information online. At least not that I could find. Nearly every piece of information I found conflicted in at least some small way with another piece I found. So I asked the Linux Kernel mailing list in the hopes of getting a definitive answer.

See also: Kernel Traffic

Here is a summary of the information I gathered. While probably not totally accurate, it should serve as a good rule of thumb. Not being a programmer or a hardware hacker, I'll admit the details are at the very fringes of my comprehension and make my head hurt.

Some of the URLs from which I dragged this information:

A more recent discussion from an Intel forum:

Responses from my post to the Linux Kernel mailing list:

From: Albert D. Cahalan 
Subject: Re: What is the truth about Linux 2.4's RAM limitations?

> RAM)but we had problems with process dying right around 2.3GB (according
> to top).

Out of 3 GB, you had 2.3 GB used and 0.7 GB of tiny chunks of
memory that were smaller than what you tried to allocate.

>  * What is the maximum amount of RAM that a *single* process can address
>    under a 2.4 kernel, with PAE enabled?  Without?

Just the same: 3 GB.

>  * And, what (if any) paramaters can effect this (recompiling the app
>    etc)?

There is a kernel patch that will get you to 2.0 or 3.5 GB.
The limit is 4 GB minus a power of two big enough for the kernel.

> Linux 2.4 does support greater then 4GB of RAM with these caveats ...
>  * It does this by supporting Intel's PAE (Physical Address Extension)
>    features which are in all Pentium Pro and newer CPU's.
>  * The PAE extensions allow up to a maximum of 64GB of RAM that the OS
>    (not a process) can address.
>  * It does this via indirect pointers to the higher memory locations, so
>    there is a CPU and RAM hit for using this.

Sort of. It maps and unmaps memory to access it. You suffer this
with the 4 GB option as well.

>  * Benchmarks seem to indicated around 3-6% CPU hit just for using the PAE
>    extensions (ie. it applies regardless of whether you are actually
>    accessing memory locations greater then 4GB).
>  * If the kernel is compiled to use PAE, Linux will not boot on a computer
>    whose hardware doesn't support PAE.
>  * PAE does not increase Linux's ability for *single* processes to see
>    greater then 3GB of RAM (see 

I think you mean "Without".

The 4 GB limit is really less, depending on your hardware and BIOS.
Your BIOS will create a memory hole below 4 GB large enough for all
your PCI devices. This hole might be 1 or 2 GB.

>  * With 2.4 kernels (with a large memory configuration) a single process
>    can address up to the total amount of RAM in the machine minus 1GB
>    (reserved for the kernel), to a maximum 3GB.
>  * By default the kernel reserves 1GB for it's own use, however I think
>    that this is a tunable parameter so if we have 4GB of RAM in a box we
>    can tune it so that most of that should be available to the processes
>    (?).

Yes. Then you suffer more map/unmap overhead.

From: Jonathan Lundell 
Subject: Re: What is the truth about Linux 2.4's RAM limitations?

At 1:01 PM -0700 2001-07-09, Adam Shand wrote:
>So what are the limits without using PAE? Here I'm still having a little
>problem finding definitive answers but ...
>  * With PAE compiled into the kernel the OS can address a maximum of 4GB
>    of RAM.

Do you mean "Without..."?

>  * With 2.4 kernels (with a large memory configuration) a single process
>    can address up to the total amount of RAM in the machine minus 1GB
>    (reserved for the kernel), to a maximum 3GB.
>  * By default the kernel reserves 1GB for it's own use, however I think
>    that this is a tunable parameter so if we have 4GB of RAM in a box we
>    can tune it so that most of that should be available to the processes
>    (?).

include/asm-i386/page.h has the key to this partitioning:

  /*
   * This handles the memory map.. We could make this a config
   * option, but too many people screw it up, and too few need
   * it.
   *
   * A __PAGE_OFFSET of 0xC0000000 means that the kernel has
   * a virtual address space of one gigabyte, which limits the
   * amount of physical memory you can use to about 950MB.
   *
   * If you want more physical memory than this then see the CONFIG_HIGHMEM4G
   * and CONFIG_HIGHMEM64G options in the kernel configuration.
   */

#define __PAGE_OFFSET           (0xC0000000)

Whether you could simply bump __PAGE_OFFSET up to (say) 0xE0000000 
and get 3.5GB of user-addressable memory I have no idea, but this is 
where you'd have to start.

Also keep in mind the distinction between virtual and physical 
addresses. A process has virtual addresses that must fit into 32 
bits, so 4GB is the most that can be addressed without remapping part 
of virtual space to some other physical space.

Also of interest is Chapter 3 of "IA-32 Intel Architecture Software 
Developer's Manual Volume 3: System Programming Guide", which you can 
find at http://developer.intel.com/design/PentiumIII/manuals/

Keep in mind that Linux uses the flat (unsegmented) model.

PAE extends physical addresses only (to 36 bits), and does nothing 
for virtual space.

From: Andi Kleen 
Subject: Re: What is the truth about Linux 2.4's RAM limitations?

>  * And, what (if any) paramaters can effect this (recompiling the app
>    etc)?

The kernel parameter is a constant called __PAGE_OFFSET which you can 
set. You also need to edit arch/i386/vmlinux.lds

The reason why your simulation stopped at 2.3GB is likely that the malloc 
allocation hit the shared libraries (check with /proc/<pid>/maps). Ways 
around that are telling malloc to use mmap more aggressively (see the 
malloc documentation in info libc) or moving the shared libraries up  
by changing a kernel constant called TASK_UNMAPPED_BASE.

Date: Wed, 19 Feb 2003 11:50:50 +0000 
From: Duncan Mckenzie
Subject: Re: Memory limits of the intel processor

Adam Shand wrote:
> hi duncan,
> the short answer is that it's not possible with a 32bit operating 
> system.  the maximum size for a single process is:
> 4gb - 1gb (reserved for the kernel) - "overhead"
> where overhead is size of the executable plus the size of any linked 
> libraries etc.
> there's more information here, please add to it if you learn anything new.
> http://www.spack.org/index.cgi/LinuxRamLimits

Dear Adam

Thanks for replying. I rather feared that this would be the case.

In case it is of interest to you, I have been using RedHat 8.0, kernel 2.4.20 
on my dual Xeon machine. I looked into other flavours of UNIX, such as SCO UnixWare,
which elicited the following response from their pre-sales technical department:

> Duncan, 
> You have posed a very interesting question. 
> To add a little more meat to the answer: 
> SVMMLIM: (max 0x7FFFFFFF) is the soft limit specifying the maximum address 
> space that can be mapped  to a process (HVMMLIM is the hard limit) 
> STKLIM: (max 0x7FFFFFFF) is the maximum stack size for a process (the 
> process stack resides within the [SH]VMMLIM address space) - HSTKLIM is the 
> hard limit. 
> ie. This is 4GB - 1 byte. 
> Therefore, the maximum amount of RAM that one process can consume is no more 
> than 4GB. 
> By default, the entries in /etc/conf/cf.d/mtune are set to: 
> Value           Default         Min             Max 
> -----           -------         ---             --- 
> SVMMLIM         0x9000000       0x1000000       0x7FFFFFFF  
> HVMMLIM         0x9000000       0x1000000       0x7FFFFFFF  
> SSTKLIM         0x1000000       0x2000          0x7FFFFFFF  
> HSTKLIM         0x1000000       0x2000          0x7FFFFFFF  
> I hope that this answers your question in a little more detail. 
> Best Regards, 
> Simon 
> SCO PreSales Support

I also looked into Solaris for Intel, but found that it has a similar limit:

> Maximum: 32 Gbytes [IA based systems that use the Intel Pentium Pro 
> and subsequently released Intel CPUs can address up to 32 Gbytes of 
> physical memory. Individual processes are still limited to a maximum 
> of 3.5 Gbytes of virtual address space however.]

So I'll be trying to get my dual Xeon system sent back and replaced with 
a 64-bit system.


Date: Mon, 29 Sep 2003 11:50:50 +0000 
From: Bob Flagg <bob@calcworks.net>
Subject: Memory limits of the intel processor

Using the linux hugemem kernel, Blackdown's jdk 1.4.1_01 and 
various upgrades to system libraries, I was able to allocate  
over 3.7GB of virtual memory to the java virtual machine.  
This is all on a 32 bit Intel Pentium 4 chip.  


More information which Bob kindly emailed to me.

At Sight Software we provide process control solutions written in  
Java and  running on Linux.  For some new clients, our application 
needed to access large amounts of RAM, but on RedHat 9.0 with 
Sun's JVM we were not able to allocate more than 2 GB of RAM for the 
application.  I investigated this for some time (see 
for useful information and links), asking various vendors for help and 
trying various modifications to the kernel.  The key tip came from 
Matthew Smith at RedHat, who pointed me to the posting
announcing Red Hat Enterprise Linux 3.  Based on that posting, I did the
following, which allowed access to more than 3.7GB for the JVM:

        1.  Made a clean, standard install of Red Hat Linux 9 
            (x86 @ $39.95).
        2.  Installed the hugemem kernel. This is available at
            After downloading the RPM, as root, ran:
        >> rpm -ihv  kernel-hugemem-2.4.21-1.1931.2.399.ent.i686.rpm    
        3.  After testing this kernel, edited 
            to make default point to the hugemem kernel.
        4.  Installed apt:
                Apt is available for Red Hat 9 at 
                After downloading the apt RPM, as root, ran:
                        >> rpm -ihv apt-0.5.5cnc5-fr2.i386.rpm
                (See "A Very Apropos apt" by R. Scott Granneman, 
                Linux Magazine, 10/2003.)
        5.  Brought my system up-to-date by running the 
            following as root:
                >> apt-get update
                >> apt-get upgrade  
        6.  Installed Blackdown's j2sdk available at
            (Installation instructions are available at    
        7.  Rebooted.
        8.  Tested              
                /usr/java/j2sdk1.4.1/bin/javac  Bug4697804.java
                /usr/java/j2sdk1.4.1/bin/java  -Xmx3702m  Bug4697804
                Bug4697804.java is:
                import java.util.*;
                public class Bug4697804 {
                        public static void main(String[] args) {
                            try {
                                create(2, 1024);
                            } catch (OutOfMemoryError err) {
                                System.err.println("OutOfMemoryError");
                            }
                        }
                        private static Object create(int d, int n) {
                            List x = new ArrayList(n);
                            if (d > 0) {
                                for (int i = 0; i < n; ++i) {
                                    x.add(create(d-1, n));
                                }
                            }
                            return x;
                        }
                }
                for more info on this.

That's it.  Hope this helps.  If I've left anything out, 
please let me know and I'll try to fill in more details.

From: Richard Cownie
Subject: 3.75GB linux processes
Date: July 17, 2001 4:33:51 AM GMT+12:00


I saw your request for information about max linux process size.

Here at Ikos Systems we can successfully run with a process size
of (almost) 3.75GB and heap size of close to 3.5GB, using a
slightly hacked version of the 2.2.17 kernel (Mandrake-7.2),
together with our own memory allocator library.

The trick is to choose CONFIG_1GB and then edit the constants
in /usr/src/linux/include/asm-i386/page_offset.h like this:

#include <linux/config.h>
#ifdef CONFIG_1GB
/* limit kernel space to 256M  tich 23Oct00 */
#define PAGE_OFFSET_RAW 0xF0000000
#elif defined(CONFIG_2GB)
#define PAGE_OFFSET_RAW 0x80000000
#elif defined(CONFIG_3GB)
#define PAGE_OFFSET_RAW 0x40000000
#endif

This limits the kernel address space to 256MB, allowing 3.75GB
for the user virtual address space.  For our application, the
kernel space is not important - your mileage may vary.

The other part of the problem is that the kernel and libraries
choose to use the user address space in rather strange ways -
e.g. mmap() and the dynamic loader will put stuff somewhere in
the middle of the address space, depending on the particular
kernel configuration parameters - and it's likely that malloc()
won't cope with this gracefully.  We use our own memory allocator
which knows which address ranges are "safe" and allocates
these explicitly using mmap(/dev/zero, MAP_FIXED). I probably
can't release all the details of this without jumping through some
legal/management hoops - sorry! but here are some suggested
address ranges (WARNING: this involves much guesswork and comes
with absolutely no guarantee).

static Ulong FreeList_mmapRegionsA[] = { // Solaris, Linux CONFIG_256M
  0x60000000, 0x7f800000, 0, /* 0 means down from top */
  0x30000000, 0x40000000, 0,
  0x44000000, 0x50000000, 0,
  0x54000000, 0x60000000, 0,
  0x20000000, 0x24000000, 0,
  0x18000000, 0x20000000, 0,
  0x80000000, 0xbf800000, 0,
  0xbf800000, 0xc2000000, 1, /* 1 means up from bottom */
  0xc4000000, 0xef000000, 1,
  0x2a000000, 0x30000000, 0,
  0x40000000, 0x44000000, 0,
  0x14000000, 0x18000000, 0,
  0x52000000, 0x54000000, 0,
  0x10000000, 0x14000000, 0,
  0x24000000, 0x2a000000, 0,
  0x40000000, 0x44000000, 0,
  0x50000000, 0x54000000, 0,
  0xc2000000, 0xc4000000, 0,
#if pfLINUX
  0x7f800000, 0x80000000, 0,
  0xef000000, 0xef800000, 1,
  0x51000000, 0x52000000, 0,
  0x24000000, 0x26000000, 0,
  0x0a000000, 0x10000000, 0,
#endif
};

If any kernel/library hackers are reading this, I must say this
is a major pain compared to the way Solaris does it (heap
grows up from above code/data, stack/sharedlibs/mmap come down
from the process size limit, anything in between is up for grabs).
I'm a huge fan of Linux in general, but in this area there is
room for improvement ...

Anyway, we run this on a dual-Pentium3/1GHz w/ 4GB DRAM, 
(the Marquis C250S from www.aslab.com - a bargain at about $3500)
and it seems stable enough for serious use (10-20 day uptime,
which is about as reliable as our flaky local power).

I have not yet tried any of this in the 2.4.x kernels - as far
as I can tell it should be the same, but I'm probably going to
wait for Mandrake-8.1 before attempting it (the reports of
VM problems in the early 2.4.x kernels have scared me off for now).

Good luck
   Richard Cownie, Ikos Systems


Other sources (added by Anonymous):

LinuxRamLimits (last edited 2007-12-23 19:42:28 by AdamShand)