
notes by Adam Sullovey

web & mobile application developer
practicing in Toronto, ON

Node.js, "too many open files", and ulimit

Your WebSocket-enabled Node.js backend suddenly logged this error:

Error: EMFILE, Too many open files

Here’s what I learned when I ran into this error hosting a Node.js app using WebSockets on Ubuntu.

But my app doesn’t read or write files to the file system!

The word ‘files’ is misleading. The real problem is that too many ‘file descriptors’ are open. A file descriptor can be a lot of things, including a handle to an input or output resource such as a network connection. If you are serving plain HTTP, these file descriptors open and close quickly: a connection from a client to your server is opened, the request is answered with a response, and the connection shuts down. You might never reach the limit and see this error. However, if you open a WebSocket connection for each user who visits the site, the count of file descriptors climbs for as long as visitors stay connected. A WebSocket connection stays open much longer than an HTTP request/response cycle, so the network connections (and their open file descriptors) accumulate.

Visualizing your open file descriptors

You can get the total count of open file descriptors with:

$ lsof | wc -l

(lsof lists open files, wc -l counts the lines)

If you want to narrow this down to just one Node.js process’ open file descriptors, add a filter. First, find your Node.js process’ PID:

$ ps aux | grep node
adam  14980 0.8  0.8 126212 24944  pts/1 Sl+  12:29 0:02 node server.js

(ps lists processes, grep filters them)

I know the second column is my PID. I can use that to filter lsof with -p:

$ lsof -p 14980
node    14980 adam  cwd    DIR    8,6     4096 4597208 /home/adam/Documents/socketio-demos/chat_after 
node    14980 adam  rtd    DIR    8,6     4096       2 / 
node    14980 adam  txt    REG    8,6 31604723 4855676 /home/adam/.nvm/versions/node/v7.0.0/bin/node 
node    14980 adam  mem    REG    8,6  1742312 4587818 /lib/i386-linux-gnu/libc-2.15.so 
node    14980 adam  mem    REG    8,6   124663 4588543 /lib/i386-linux-gnu/libpthread-2.15.so 
node    14980 adam  mem    REG    8,6   116232 4588449 /lib/i386-linux-gnu/libgcc_s.so.1 
node    14980 adam  mem    REG    8,6   173576 4587822 /lib/i386-linux-gnu/libm-2.15.so 
node    14980 adam  mem    REG    8,6   905712 7345110 /usr/lib/i386-linux-gnu/libstdc++.so.6.0.16
node    14980 adam  mem    REG    8,6    30684 4587807 /lib/i386-linux-gnu/librt-2.15.so 
node    14980 adam  mem    REG    8,6    13940 4587815 /lib/i386-linux-gnu/libdl-2.15.so 
node    14980 adam  mem    REG    8,6   134344 4588554 /lib/i386-linux-gnu/ld-2.15.so 
node    14980 adam    0u   CHR  136,1      0t0       4 /dev/pts/1 
node    14980 adam    1u   CHR  136,1      0t0       4 /dev/pts/1 
node    14980 adam    2u   CHR  136,1      0t0       4 /dev/pts/1 
node    14980 adam    3r  FIFO    0,8      0t0  117628 pipe 
node    14980 adam    4w  FIFO    0,8      0t0  117628 pipe 
node    14980 adam    5u  0000    0,9        0    6531 anon_inode 
node    14980 adam    6r  FIFO    0,8      0t0  117629 pipe
node    14980 adam    7w  FIFO    0,8      0t0  117629 pipe 
node    14980 adam    8u  0000    0,9        0    6531 anon_inode 
node    14980 adam    9u   CHR  136,1      0t0       4 /dev/pts/1 
node    14980 adam   10r   CHR    1,3      0t0    1056 /dev/null 
node    14980 adam   11u  IPv6 119080      0t0     TCP *:http-alt (LISTEN) 
node    14980 adam   12u   CHR  136,1      0t0       4 /dev/pts/1 
node    14980 adam   15u  IPv6 122405      0t0     TCP adam-m1330.local:http-alt->Adams-MacBook-Pro.local:64982 (ESTABLISHED) 
node    14980 adam   23u  IPv6 119894      0t0     TCP localhost:http-alt->localhost:38218 (ESTABLISHED) 
node    14980 adam   24u  IPv6 119436      0t0     TCP localhost:http-alt->localhost:38219 (ESTABLISHED)

These are all the open file descriptors for my Node.js process (the chat_after demo from this socketio-demos repo). See the last 3 lines with a TYPE of IPv6? Those are file descriptors for 3 open WebSocket connections from 3 web browsers connected to a Node.js chatroom running on my M1330 laptop. (The earlier IPv6 line marked LISTEN is the server’s listening socket.)

  • adam-m1330.local:http-alt->Adams-MacBook-Pro.local:64982 is a connection from a MacBook on the same network
  • localhost:http-alt->localhost:38218 and the line after it are two browser tabs open on the same laptop as the server

Linux and Open File (Descriptor) Limits

And like all things that can be counted, there are limits in place to prevent systems from overloading themselves and crashing.

User specific file limits

Linux puts limits on the number of files a user (like the one who runs your Node.js process) can have open at once. Low limits help keep one user from hogging resources on a server shared by many users. However, if you are deploying servers dedicated to running Node.js processes, you can safely raise this limit to give Node.js access to more resources.

You can see the limit with the command ulimit -n:

$ ulimit -n

To raise this limit, add new lines like these to the file /etc/security/limits.conf:

*    soft    nofile    10000
*    hard    nofile    10000

(The * applies the limit to every user; you can put a specific username there instead. To see all the other user-specific limits you can configure, run ulimit -a.)

(the difference between hard and soft limits isn’t important in the context of a Node.js server, but I like this explanation if you are curious about it.)

The exact number is up to you. Consider:

  • How many simultaneous WebSocket connections do you need to support on this server for this user?
  • How many can your application actually handle?
  • Are there other users on this machine that need some capacity reserved for them as well?

Log out and back in (or restart the machine), then run ulimit -n to see if the value has stuck.

Operating system limits

Ubuntu also has a system-wide limit on file descriptors. You can view it with the command sysctl -a | grep fs.file-max:

$ sysctl -a | grep fs.file-max
fs.file-max = 308115

If you need to raise this as well, edit the file /etc/sysctl.conf and add a new line with the new value (higher than the current one), like this:

fs.file-max = 400000

Again, consider how high a limit you actually need, and what your server can handle.

To apply changes, run:

$ sysctl -p

(-p will load the settings from /etc/sysctl.conf into memory)


I also wrote a bit about open file descriptor counts in 2014.