How to fix java.lang.OutOfMemoryError: Java heap space error?

I have a 32 MB file that I downloaded from a DocuShare server into the DocuShare temp folder, and I am trying to read its content in order to create a file. The error occurs when I URL-encode my Base64 content. I don't get any exception when I run the same code as a simple Java application, but when I use it in a DocuShare service to get the document content I get:

HTTP Status 500 - org.glassfish.jersey.server.ContainerException: java.lang.OutOfMemoryError: Java heap space

File file = new File(filePath);
FileInputStream fileInputStreamReader = new FileInputStream(file);
byte[] bytes = new byte[(int)file.length()];
fileInputStreamReader.read(bytes);
String encodedBase64 = java.util.Base64.getEncoder().encodeToString(bytes);
String urlEncoded = URLEncoder.encode(encodedBase64);

How do I fix this error? Do I need to increase my Tomcat heap size?

There are 3 answers below.

Pavel Smirnov (Best Answer)

Base64 converts each 3 bytes into 4 characters. That means you can read your data in chunks and encode each chunk the same way you would encode the whole file, as long as the chunk size is a multiple of 3 (otherwise padding characters would be inserted mid-stream and corrupt the output).

Try this:

    import java.io.File;
    import java.io.FileInputStream;
    import java.util.Arrays;
    import java.util.Base64;

    File file = new File(filePath);
    StringBuilder sb = new StringBuilder();
    Base64.Encoder encoder = Base64.getEncoder();
    int bufferSize = 3 * 1024; // 3 KB per chunk; a multiple of 3, so no '=' padding appears mid-stream
    byte[] bytes = new byte[bufferSize];
    int readSize;

    try (FileInputStream fileInputStreamReader = new FileInputStream(file)) {
        // On a FileInputStream, read() fills the whole buffer except at end-of-file,
        // so a short read (or -1 when the size is an exact multiple) marks the last chunk.
        while ((readSize = fileInputStreamReader.read(bytes)) == bufferSize) {
            sb.append(encoder.encodeToString(bytes));
        }

        if (readSize > 0) {
            // Encode the final, shorter chunk.
            bytes = Arrays.copyOf(bytes, readSize);
            sb.append(encoder.encodeToString(bytes));
        }
    }

    String encodedBase64 = sb.toString();
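
As a quick sanity check of that multiple-of-3 property, here is a minimal, self-contained demo (my own illustration, not part of the answer above): encoding two 3-byte chunks separately and concatenating the results yields exactly the same string as encoding all six bytes at once.

    import java.util.Arrays;
    import java.util.Base64;

    public class ChunkCheck {
        public static void main(String[] args) {
            byte[] data = {1, 2, 3, 4, 5, 6};
            Base64.Encoder enc = Base64.getEncoder();
            String whole = enc.encodeToString(data);
            String chunked = enc.encodeToString(Arrays.copyOfRange(data, 0, 3))
                    + enc.encodeToString(Arrays.copyOfRange(data, 3, 6));
            System.out.println(whole.equals(chunked)); // prints "true"
        }
    }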

vavasthi

If you load a whole file into memory, you will always run into OOM errors once the file is big enough. If your goal is Base64 encoding, you can avoid that by streaming the data through Apache Commons Codec's Base64InputStream:

https://commons.apache.org/proper/commons-codec/apidocs/org/apache/commons/codec/binary/Base64InputStream.html
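
For illustration, a minimal sketch of how that might look (the file names here are placeholders, and commons-codec must be on the classpath):

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.io.InputStream;
    import java.io.OutputStream;
    import org.apache.commons.codec.binary.Base64InputStream;

    public class StreamingEncode {
        public static void main(String[] args) throws Exception {
            // The second argument selects encoding (true) rather than decoding;
            // lineLength 0 disables the 76-character MIME line breaks.
            try (InputStream in = new Base64InputStream(new FileInputStream("input.bin"), true, 0, null);
                 OutputStream out = new FileOutputStream("output.b64")) {
                byte[] buffer = new byte[8192];
                int read;
                while ((read = in.read(buffer)) != -1) {
                    out.write(buffer, 0, read);
                }
            }
        }
    }

This way only the 8 KB buffer is held in memory, regardless of the file size.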

Svetlin Zarev

There are two ways in which you can fix the issue.

  1. You can increase the heap size (e.g. by raising the -Xmx value in Tomcat's CATALINA_OPTS), but IMO this is a bad solution, because you will hit the same issue again as soon as you get several parallel requests or try to process a bigger file.

  2. You can optimize your algorithm: instead of storing several copies of your file in memory, process it in a streaming fashion, holding no more than a few KB in memory at any time:

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.Base64;

    public class Launcher {
        public static void main(String[] args) throws Exception {
            final Path input = Paths.get("example");
            final Path output = Paths.get("output");

            // Base64.Encoder.wrap() returns an OutputStream that encodes on the fly,
            // so only the 8 KB buffer below is ever held in memory.
            try (InputStream in = Files.newInputStream(input);
                 OutputStream out = Base64.getUrlEncoder().wrap(Files.newOutputStream(output))) {
                final byte[] buffer = new byte[1024 * 8];

                for (int read = in.read(buffer); read > 0; read = in.read(buffer)) {
                    out.write(buffer, 0, read);
                }
            }
        }
    }

PS: If you really need the URL encoder, you'll have to create a streaming version of it yourself, but I think URL-safe Base64 would be more than enough.
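
To illustrate why (a small demo of my own, not part of the original answer): the URL-safe alphabet replaces '+' and '/' with '-' and '_', and withoutPadding() drops the trailing '=', so the encoded string needs no further URL encoding.

    import java.util.Base64;

    public class UrlSafeDemo {
        public static void main(String[] args) {
            byte[] data = {(byte) 0xfb, (byte) 0xff};
            // The standard alphabet can emit '+', '/' and '=', all of which need URL encoding:
            System.out.println(Base64.getEncoder().encodeToString(data));                     // +/8=
            // The URL-safe alphabet uses '-' and '_'; withoutPadding() drops the '=':
            System.out.println(Base64.getUrlEncoder().withoutPadding().encodeToString(data)); // -_8
        }
    }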