I am writing a C# program that dynamically generates a fairly large PDF document, more than 1000 pages, from a large dataset in a local file. Each page contains text and images.
The program crashes once the page count reaches a certain number, so I suspect I am hitting some kind of memory limit. Is this a bug in the library? How can I release the completed pages from memory? Does anyone have a suggestion for solving this?
Answers
From what I know, your program may be taking up a gigabyte or more of memory, which is an awfully large footprint. Consider splitting the source data into smaller chunks, creating an individual small PDF file from each, and then merging them all at the end.
It is hard to pin down the cause, but I don't think there is a memory leak in the library. It keeps the complete PDF structure in memory, and only the items actually used are written to the output file. In theory, the memory should be freed after the PDF file is closed. The process size will not shrink immediately, though: the freed memory is re-used by the runtime, so the process footprint should not grow much while creating further PDF files.
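As a concrete illustration of "memory is freed after closing", the sketch below makes sure `Close()` runs even if an exception is thrown mid-generation. This is a minimal example assuming iTextSharp 5 (`Document`, `PdfWriter`); if you are using a different library, the equivalent is whatever close/dispose call it exposes:

```csharp
using System.IO;
using iTextSharp.text;
using iTextSharp.text.pdf;

using (var stream = new FileStream("output.pdf", FileMode.Create))
{
    var doc = new Document(PageSize.A4);
    PdfWriter.GetInstance(doc, stream);
    doc.Open();
    try
    {
        // ... add your pages of text and images here ...
        doc.Add(new Paragraph("Hello, PDF"));
    }
    finally
    {
        // Flushes the writer and makes the in-memory page structures
        // eligible for garbage collection. Note the process working set
        // may still not shrink right away.
        doc.Close();
    }
}
```

Closing the document before the `using` block disposes the stream matters: `Close()` is what flushes the remaining PDF objects into the file.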
I tried to optimize the C# code by invoking the garbage collector manually, but the program still crashed with the same exception once the PDF document grew large (more than 90 MB, for example).
The most practical approach in this case is to change your technique: produce smaller documents individually, then combine them into the final PDF.
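A sketch of the merge step, assuming iTextSharp 5's `PdfCopy` API (the part-file names and the idea of fixed-size chunks are illustrative; any chunk size that keeps each part comfortably in memory will do):

```csharp
using System.Collections.Generic;
using System.IO;
using iTextSharp.text;
using iTextSharp.text.pdf;

static void MergeParts(IEnumerable<string> partFiles, string outputPath)
{
    using (var stream = new FileStream(outputPath, FileMode.Create))
    {
        var merged = new Document();
        var copy = new PdfCopy(merged, stream);
        merged.Open();
        foreach (var path in partFiles)
        {
            var reader = new PdfReader(path);
            // Copy every page of this part into the merged document.
            for (int page = 1; page <= reader.NumberOfPages; page++)
                copy.AddPage(copy.GetImportedPage(reader, page));
            // Release this part before loading the next one, so only
            // one chunk's page tree is held in memory at a time.
            reader.Close();
        }
        merged.Close();
    }
}

// Usage: generate part_000.pdf, part_001.pdf, ... (e.g. 100 pages each),
// then: MergeParts(new[] { "part_000.pdf", "part_001.pdf" }, "final.pdf");
```

Because each part is generated, closed, and later streamed page by page into the output, the peak memory footprint is bounded by the largest chunk rather than by the full 1000-plus-page document.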