
10.5: How to create a RAM disk larger than 2.2GB System 10.5
There isn't a 64-bit RAM disk Mac application that will allow creation of RAM disks larger than 2.2GB (or maybe I haven't found it yet, as Google turns up nothing useful). So instead, here is a method using the built-in tools in Leopard. I can't find the original post that had the initial RAM disk creation scripting, but huge props to the person who first posted it. It was much more optimized than my own, except that mine uses the maximum allowable RAM disk size and adds RAID functionality.

Uses: Scratch disk for iShowU and other screen capture programs; scratch disk for any program (especially Shake, Photoshop, After Effects); particle disk cache in Maya; PFTrack, etc. In short, a RAM disk is good for anything that you don't want your hard drive to be involved in -- crunching huge numbers, etc.

You can create RAM disks with the $25 ramBunctious, or the free Esperance DV. Esperance DV does the exact same thing as ramBunctious, but for free and with better options and a better interface. However, as far as I know, both programs only allow the disk image to be approximately 2GB, and I have tried everything to make this limit disappear, with no luck. I also can't make multiple RAM disk images with Esperance DV.

So I came up with this method to create a RAM disk larger than 2GB: make multiple RAM disks and RAID stripe them! If you have enough RAM, give this a shot. I routinely use 10GB ram disks on my cluster machines, as it makes things ridiculously fast. I got this idea from running whole Linux distributions booted completely into RAM, and loving the immense speed.

First, make sure you have enough RAM for this hint! Your computer will slow to a crawl if you don't have enough RAM for the size of the RAM disk you want to make.

To create a 2.2GB RAM disk (the largest possible, I think -- any larger and you may get memory allocation errors and it will fail), run this Terminal command:
diskutil erasevolume HFS+ "r1" `hdiutil attach -nomount ram://4661720`;
Reduce the last number (which is the size of the disk in 512-byte blocks) if you don't have enough RAM left for the OS (keep at least 1GB free). Eject this disk as normal when finished and you'll be good to go. To create two 2.2GB RAM disks, striped as a RAID volume (so the OS sees a single 4.18GB RAM disk), use these commands:
$ diskutil erasevolume HFS+ "r1" `hdiutil attach -nomount ram://4661720`;
$ diskutil erasevolume HFS+ "r2" `hdiutil attach -nomount ram://4661720`;
$ diskutil createRAID stripe SpeedDisk HFS+ /Volumes/r1 /Volumes/r2;
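For reference, here is a sketch of where the magic number above comes from. The `ram://` argument to hdiutil is a count of 512-byte sectors (consistent with this hint: 4661720 x 512 ≈ 2.2GB), so you can compute the count for any size you want. The diskutil line only works on Mac OS X, so it's left commented out here:

```shell
# Compute the ram:// sector count for a desired RAM disk size.
# ram:// takes a count of 512-byte sectors: 4661720 * 512 ~= 2.2GB.
SIZE_GB=2
BLOCKS=$((SIZE_GB * 1024 * 1024 * 1024 / 512))
echo "$BLOCKS"    # 4194304 sectors for a 2GB disk
# Then, on Mac OS X:
#   diskutil erasevolume HFS+ "r1" `hdiutil attach -nomount ram://$BLOCKS`
```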
When you're done, use Disk Utility to delete the RAID set. If you don't (i.e. you just eject the volumes in the Finder), they will stay online and linger in RAM. (If you run top after ejecting the disks in the Finder, you'll see two diskimages processes in the list.) You could use kill -9 PID_of_disk_images to kill the processes, but I'm not sure how cleanly this works.

If you know how to unmount RAID volumes in Terminal, reverse this whole process, and recover the RAM, please post. I have a few methods using diskutil that destroy the RAID volume (separating it back into its member disks), then unmount the stripes. This seems to work, but I don't want to submit it, as it's not fully tested. I also use kill -9 pid_number, which is a dirty method but has worked without a hitch (so far, at least).
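For anyone experimenting with a Terminal-only teardown, here is an untested sketch of the commands that should be involved: `diskutil destroyRAID` to break the set apart, then `hdiutil detach` on each RAM-backed device to release its memory. The device names are assumptions -- check `diskutil list` on your machine first. It's written as a dry run that only echoes the commands, so you can review them before running anything for real:

```shell
# Dry-run sketch of a Terminal-only teardown. The device names
# (/dev/disk2, /dev/disk3) are assumptions; check `diskutil list`.
teardown_ramraid() {
  raid_name=$1; shift
  echo "diskutil destroyRAID $raid_name"   # break the RAID set into members
  for dev in "$@"; do
    echo "hdiutil detach $dev"             # detach each RAM device, freeing its memory
  done
}
teardown_ramraid SpeedDisk /dev/disk2 /dev/disk3
```

To actually run the teardown, replace the `echo`s with the commands themselves once you've confirmed the device names.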

[robg adds: I created a smaller 2GB striped RAID RAM disk as a test, using this site to convert GB to block size for the ram:// portion of the command. The test worked well, as did ejecting the disks via Disk Utility.

In researching this hint, I found this page, which has some interesting notes about RAM disks -- in this user's tests, a RAM disk was actually slower than a physical striped RAID, apparently because OS X caches disk I/O on RAM disks. This not only makes things slower, but means that a full RAM disk will require twice as much RAM as its size -- i.e. a full 2GB RAM disk will require 4GB of RAM.

On the linked page, there's also a downloadable set of scripts for creating and deleting RAM disks, which may provide a Terminal-based alternative to removing the RAID array using Disk Utility.]

10.5: How to create a RAM disk larger than 2.2GB | 14 comments
The following comments are owned by whoever posted them. This site is not responsible for what they say.
10.5: How to create a RAM disk larger than 2.2GB
Authored by: mkuron on Feb 25, '09 08:19:30AM
Instead of creating a stripe RAID, I'd suggest creating a concat RAID. The difference is that on a striped RAID (RAID 0), the operating system has more work to do (send block 1 to disk 1, block 2 to disk 2, block 3 to disk 1, block 4 to disk 2, and so on) than on a concatenated RAID (send the first 2.2GB to the first disk and the second 2.2GB to the second disk). I am not sure if that'll solve the performance issue described by robg, but in theory, a concat RAID is faster than a stripe RAID.
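mkuron's point can be illustrated with a toy model of where each logical block lands on a two-member array (pure illustration, not real RAID code -- in a real array the unit would be the RAID chunk size, not a single block):

```shell
# Toy model of block placement on a 2-member array (illustration only).
# Striping alternates blocks between members; concatenation fills the
# first member completely before touching the second.
MEMBER_BLOCKS=4                                  # blocks per member (toy size)
stripe_member() { echo $(( $1 % 2 + 1 )); }      # logical block -> member number
concat_member() { [ "$1" -lt "$MEMBER_BLOCKS" ] && echo 1 || echo 2; }
for b in 0 1 2 3 4 5; do
  printf "block %d: stripe->member%s  concat->member%s\n" \
    "$b" "$(stripe_member "$b")" "$(concat_member "$b")"
done
```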

10.5: How to create a RAM disk larger than 2.2GB
Authored by: ccjensen on Feb 25, '09 12:29:53PM
Would this help speed up transcoding with a utility like HandBrake?

10.5: How to create a RAM disk larger than 2.2GB
Authored by: mayathemacjedi on Mar 02, '09 08:38:13PM

Generally speaking, any computation you can do entirely in RAM will help immensely. I have mounted whole ISOs and used OSeX for various transcoding operations, and it does help. For streaming or screen capture sessions, RAM disks help a lot as well. If you are going straight from the DVD to the HDD, the DVD will be so slow that transcoding from a previously captured ISO copied to RAM, writing to a RAM disk, would be much faster. You still have to read the initial ISO from the slower DVD, but once it's saved you can transcode much faster from HDD or RAM. The fastest setups would be HDD to another HDD, HDD to RAM, or RAM to RAM. I'm not sure, though, that the HDD is the bottleneck in this case, since the CPU may not be fast enough to be starved by the data flow from an HDD. That would be a good test to perform!

10.5: How to create a RAM disk larger than 2.2GB
Authored by: mayathemacjedi on Mar 02, '09 08:30:28PM

I thought I was logged in when I made the original post, but I guess I wasn't!

This is an EXCELLENT point that mkuron made about concatenation as opposed to striping; I didn't even think of that. Soft RAID comes with some overhead, as robg stated (although not much CPU overhead showed up in my initial tests; I want to take a closer look now). Concatenating would basically use the secondary RAM drive as "overflow," and dropping the soft stripe would squeeze that much more performance out of the solution.

The reason for striping was not the speed increase you would get from multiple striped physical disks, since everything is coming from the same source [RAM]. Data flows as fast as it can when read from or written to RAM, and RAID stripes offer no benefit there, as the bus is presumably maxed out anyway. (Maybe some underground driver exists that can stripe paired DIMMs on the dual independent FSBs ;0)) The soft RAID was done so that an application could use one larger logical disk for its caches and whatnot, and access it as a single larger volume (although slower, due to the striping overhead robg mentioned).

I will look into concatenating instead, as it would be a much better option, and I think it will work. Some of the tests I ran included hashing MD5s of large ISOs, uncompressing archives to the same disk they're stored on (after copying to RAM, I ran the same test on the RAM disks), duplicating files, etc. Thanks for the insight, robg, mkuron, and others -- I will try to implement these ideas and see how far the rabbit hole goes!


10.5: How to create a RAM disk larger than 2.2GB
Authored by: mayathemacjedi on Mar 09, '09 01:55:35AM

The "concat" verb in place of "stripe" works, and it eliminates the soft RAID bottlenecks. Thanks, guys, for the tips!

10.5: How to create a RAM disk larger than 2.2GB
Authored by: doneitner on Feb 25, '09 05:27:10PM

If I'm not mistaken, this would help in such a situation but only if the contents of the DVD are stored in the RAM disk -- thus cutting down read time on the DVD. However you're still going to need to read the DVD to put it into the RAM disk, so overall I'm not sure it's worth it. I have no real experience trying this, so maybe others will have more to offer.

10.5: How to create a RAM disk larger than 2.2GB
Authored by: dbs on Feb 25, '09 08:45:10PM

Not likely. Those processes are virtually always compute-limited, so speeding up disk access won't make a difference. (It's already cached with some read-ahead by the OS.)

10.5: How to create a RAM disk larger than 2.2GB
Authored by: applebit on Feb 25, '09 09:44:43PM

By far one of the best hints I've seen. Thanks for the info -- I created a RAM disk for my ~/Lib/Caches and it's twice the speed now!

Jon McCullough
Systems Support Specialist

10.5: How to create a RAM disk larger than 2.2GB
Authored by: GeJe on Mar 01, '09 05:42:53AM

Looks great, but how can you tell Mac OS X to use the RAM disk for caches?
Thanks!

10.5: How to create a RAM disk larger than 2.2GB
Authored by: mayathemacjedi on Mar 02, '09 08:08:17PM

Which program do you want to have use the RAM disk? I will look into pointing the normal OS X caches at the RAM disk (besides aliasing the folders and pointing them at the actual RAM disk folders).
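The "aliasing" approach can be done with a symlink. Here is a sketch of the mechanics, demonstrated in a temp directory rather than against the real ~/Library/Caches -- the RAM disk path is an assumption, and you should back up the real folder before trying this for real:

```shell
# Symlink mechanics for redirecting a caches folder onto a RAM disk,
# demonstrated safely in a temp dir. The real paths would be
# ~/Library/Caches and something like /Volumes/SpeedDisk/Caches
# (an assumption -- use whatever you named your RAM disk).
tmp=$(mktemp -d)
mkdir -p "$tmp/ramdisk/Caches" "$tmp/home"
ln -s "$tmp/ramdisk/Caches" "$tmp/home/Caches"   # the "alias"
touch "$tmp/home/Caches/test.file"               # writes land on the RAM disk
ls "$tmp/ramdisk/Caches"                         # lists test.file
```

Note that anything cached this way vanishes on reboot, so it only suits caches an application can rebuild.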

Striping vs Concatenation
Authored by: mayathemacjedi on Apr 29, '09 08:46:12PM

I finally got around to testing RAM RAID striping vs. concatenation, and striping is faster by far. The supposed bottleneck of striping does not seem to be a problem in these tests: current CPUs (and the dual memory buses in the G5, at least) seem to handle it quite well, with no significant bottlenecks (at least when using 2 disks in the array; there is a bottleneck if more than 2 are used). In Xbench 1.3 the disks scored as follows:

Concatenating 2 RAM disks (4GB total):
Sequential
Uncached Write == 159.39 MB/sec [4K blocks]
Uncached Write == 663.16 MB/sec [256K blocks]
Uncached Read == 50.68 MB/sec [4K blocks]
Uncached Read == 734.83 MB/sec [256K blocks]
Random 1989.25
Uncached Write == 100.79 MB/sec [4K blocks]
Uncached Write == 586.58 MB/sec [256K blocks]
Uncached Read == 51.45 MB/sec [4K blocks]
Uncached Read == 669.82 MB/sec [256K blocks]

Striping 2 RAM disks:
Sequential
Uncached Write == 197.69 MB/sec [4K blocks]
Uncached Write == 667.07 MB/sec [256K blocks]
Uncached Read == 52.49 MB/sec [4K blocks]
Uncached Read == 969.91 MB/sec [256K blocks]
Random 2146.64
Uncached Write == 95.03 MB/sec [4K blocks]
Uncached Write == 748.22 MB/sec [256K blocks]
Uncached Read == 49.81 MB/sec [4K blocks]
Uncached Read == 1035.02 MB/sec [256K blocks]

Interesting results. Anyone else got anything to submit along these lines?

10.5: How to create a RAM disk larger than 2.2GB
Authored by: Glitchtracker on Sep 25, '09 01:45:41AM

Use Snow Leopard. I just tested it and was able to create a 4.4GB RAM disk.

(Of course, you need a 64-bit capable computer.)

10.5: How to create a RAM disk larger than 2.2GB
Authored by: maurizio.dececco on Aug 19, '10 02:52:10AM

Can the tests be repeated *without* a RAID configuration?

The goal of a striped RAID configuration is to raise throughput by hiding disk latency through parallelism:
essentially, while we wait for one disk to complete an operation, we start an operation on the second disk.

There is *no* latency on a RAM disk, of course, so, at least theoretically, there can be no advantage of a RAID configuration over a single RAM disk. All the visible differences are actually effects of the configuration's OS overhead; the question is whether the OS handles a RAM disk optimally (no need for async I/O, for example, since data can be accessed immediately at CPU/memory speed).


10.5: How to create a RAM disk larger than 2.2GB
Authored by: Harald Ge on Apr 15, '11 06:06:28AM
I use this script to create an 8.3GB RAM disk every time my computer boots (after user login).
(Start Script Editor and save it as an executable application, then add the application as a user startup item. It can take up to 2 minutes until the RAID is created.)
I use a G5 (Late 2005) with very mixed RAM (PC2 4200, 5300, 6400, etc.).
It might be interesting to test this method with a RAID 1 configuration on top of the RAID 0 configuration.
This really speeds up my work, since Photoshop can address just 3GB of RAM under 10.5.8.

Enjoy H.

Here is the script:

do shell script "
if ! test -e /Volumes/\"SpeedDisk\" ; then
diskutil erasevolume HFS+ r1 `hdiutil attach -nomount ram://4629672`
diskutil erasevolume HFS+ r2 `hdiutil attach -nomount ram://4629672`
diskutil erasevolume HFS+ r3 `hdiutil attach -nomount ram://4629672`
diskutil erasevolume HFS+ r4 `hdiutil attach -nomount ram://4629672`
diskutil createRAID stripe SpeedDisk HFS+ /Volumes/r1 /Volumes/r2 /Volumes/r3 /Volumes/r4
fi"

XBench results:

Results 490.69
System Info
Xbench Version 1.3
System Version 10.5.8 (9L31a)
Physical RAM 14336 MB
Model PowerMac11,2
Processor PowerPC G5x2 @ 2.30 GHz
L1 Cache 64K (instruction), 32K (data)
L2 Cache 1024K @ 2.30 GHz
Bus Frequency 1 GHz
Drive Type SpeedDisk
Disk Test 490.69
Sequential 311.07
Uncached Write 292.70 179.71 MB/sec [4K blocks]
Uncached Write 625.50 353.91 MB/sec [256K blocks]
Uncached Read 147.69 43.22 MB/sec [4K blocks]
Uncached Read 932.28 468.56 MB/sec [256K blocks]
Random 1161.13
Uncached Write 439.26 46.50 MB/sec [4K blocks]
Uncached Write 1590.40 509.15 MB/sec [256K blocks]
Uncached Read 6071.25 43.02 MB/sec [4K blocks]
Uncached Read 2667.39 494.95 MB/sec [256K blocks]
