[
  {
    "Id": "95937",
    "ThreadId": "28768",
    "Html": "<div><span style=\"font-size:13px;font-family:arial\">Peter Rorlach wrote:<br>\r\n<br>\r\n<div>\r\n<p><span>Hello,</span></p>\r\n<p>What you created here is a very useful library indeed. And while it would have been neat if the samples in the CHM also included VB examples, I can usually figure out the code.</p>\r\n<p>However, I still get this exception thrown when I try to use the library on larger files. Example:  folder contains 64 files, totalling 2.2GB. This should become a single archive but the exception “out of memory” is thrown every time I attempt this (full error text below). Happens with other folders containing other file types as well. It appears to be strictly a matter of the total size, and it happens at the .Save stage.</p>\r\n<p>I don’t know how to reserve a segment of memory just for this library from within the application. Here are my specs, and I believe they should not run into this type of problem:</p>\r\n<p>CPU: Intel Dual Core 2.4GHz<br>\r\nRAM: 2GB<br>\r\nHD: 0.8 TB free disk space<br>\r\nOS: Windows Vista Premium<br>\r\nPaging: 4GB in two paging files<br>\r\nVS 2005 using VB<br>\r\nLibrary version 1.5 preview (although the error also occurred with version 1.4.3)</p>\r\n<p>I did raise the issue on-site but did not see any response to it yet. </p>\r\n<p>Thank you,</p>\r\n<p>With best regards, </p>\r\n<p><i><span style=\"font-size:10.5pt\">Dr. Peter Rorlach,<br>\r\n</span></i><i><span style=\"font-size:10.5pt\">Lead Technical Author/Project Manager<br>\r\n</span></i><i><span style=\"font-size:10.5pt\">Brussels, Belgium</span></i></p>\r\n<p> </p>\r\n<p>Exception Message:</p>\r\n<p>30/05/2008 - 05:36           Plugins Backup failed..: Exception of type 'System.OutOfMemoryException' was thrown. 
(System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.<br>\r\n   at System.IO.MemoryStream.set_Capacity(Int32 value)<br>\r\n   at System.IO.MemoryStream.EnsureCapacity(Int32 value)<br>\r\n   at System.IO.MemoryStream.Write(Byte[] buffer, Int32 offset, Int32 count)<br>\r\n   at System.IO.Compression.DeflateStream.InternalWrite(Byte[] array, Int32 offset, Int32 count, Boolean isAsync)<br>\r\n   at System.IO.Compression.DeflateStream.Write(Byte[] array, Int32 offset, Int32 count)<br>\r\n   at Ionic.Utils.Zip.CRC32.GetCrc32AndCopy(Stream input, Stream output)<br>\r\n   at Ionic.Utils.Zip.ZipEntry.WriteHeader(Stream s, Byte[] bytes)<br>\r\n   at Ionic.Utils.Zip.ZipEntry.Write(Stream outstream)<br>\r\n   at Ionic.Utils.Zip.ZipFile.Save()<br>\r\n   at Ionic.Utils.Zip.ZipFile.Save(String ZipFileName)<br>\r\n   at DEPSeek.frmDepSeek.zipAll(String wFolder, String wExt, String wTitle) in D:\\HandsOn\\Development\\VS2005-UWP2\\DepSearch\\DEPSeek\\DEPSeek\\frmMain.vb:line 1009) (Dr Peter Rorlach)</p>\r\n</div>\r\n</span></div>\r\n",
    "PostedDate": "2008-05-30T18:22:25.14-07:00",
    "UserRole": null,
    "MarkedAsAnswerDate": null
  },
  {
    "Id": "95938",
    "ThreadId": "28768",
    "Html": "<span style=\"font-size:13px;font-family:arial\">I did not see any discussion item on this. </span>\r\n<div><span style=\"font-size:13px;font-family:arial\">I did see an &quot;Issue&quot; reported but no clear instructions on how to reproduce it. </span></div>\r\n<div><span style=\"font-size:13px;font-family:arial\"></span> </div>\r\n<div><span style=\"font-size:13px;font-family:arial\">I've been testing this all afternoon; I have not been able to reproduce the problem you reported.<br>\r\nMaybe you can do some additional investigation? <br>\r\nAt what point does the problem occur?  2.2GB exactly?  What if you zipped up an archive of 2.0GB? Does that work? <br>\r\n<br>\r\nDoes the problem happen if you use the command-line utilities I include in the v1.5 release to zip up the files? <br>\r\n<br>\r\nDoes it always fail on the same file?  <br>\r\nWhat is the format of those files?  Are they already compressed?  If so, have you tried the ForceNoCompression flag? <br>\r\n<br>\r\netc<br>\r\netc</span></div>\r\n",
    "PostedDate": "2008-05-30T18:25:29.207-07:00",
    "UserRole": null,
    "MarkedAsAnswerDate": null
  },
  {
    "Id": "96968",
    "ThreadId": "28768",
    "Html": "Sorry to have taken so long to get back to you.<br>\r\nHere's where I am with this: I've tried various different file types with varying sizes, and once I get near or over 2 GB in the total original size the same error occurs. I forgot to add that my current .NET version is 3.5. And yes, I used both the stable release 1.4.3 and the preview release.<br>\r\nI have not tried it with the ForceNoCompression flag since that seems to defeat the purpose of a ZIP archive - then I might as well copy the files as they are. And no, the files have not been compressed. <br>\r\nI did notice, however, that some files end up larger inside the archive, so I am wondering if that affects the error - and, of course, why that would happen. I do realize that some file types simply cannot be compressed, but a text or HTML file should end up smaller since both consist largely of white space.<br>\r\n<br>\r\nBTW: if an extracted file already exists (version 1.5), even with the overwrite flag set to true, an exception (Access Denied) is thrown - I do have full admin access to the machine this is running on. This I can work around, so it is no big deal. (Ignore this - it only happens in Vista if Explorer is still pointed at that file and if that file can be previewed, such as an html, txt, or graphics file)<br>\r\n<br>\r\nThanks for your efforts.<br>\r\n<br>\r\nPeter R.<br>\r\n",
    "PostedDate": "2008-06-05T04:59:47.623-07:00",
    "UserRole": null,
    "MarkedAsAnswerDate": null
  },
  {
    "Id": "97029",
    "ThreadId": "28768",
    "Html": "Hey Doc, <br>\r\n<br>\r\nOn the &quot;Access Denied&quot; problem, by default the Extract() methods do not overwrite existing files.  There are extraction methods and properties on the ZipEntry dealing with overwriting existing files.  The doc is not clear on this - the doc for the Extract() methods that do not accept an Overwrite flag should explicitly state &quot;existing entries will not be overwritten&quot;.  I am changing that now. <br>\r\n<br>\r\n<br>\r\nOn the compression anomalies - would you be willing to share your data?  I don't know how to reproduce it. I've tried but no joy.<br>\r\n<ul>\r\n    <li>When I compress previously compressed data, I get the same-or-smaller sizes.  There is logic in the library to ensure this. I don't know why yours would ever expand. </li>\r\n    <li>I went to the 2GB threshold and beyond.  Never saw the problem you are reporting. </li>\r\n</ul>\r\n<p>Maybe if I had your files I could see the problem.<br>\r\nCan you load them up to SkyDrive or someplace like that? <br>\r\n<br>\r\nOf course it could be private data, in which case we'll have to think of something else.</p>\r\n",
    "PostedDate": "2008-06-05T08:18:56.73-07:00",
    "UserRole": null,
    "MarkedAsAnswerDate": null
  },
  {
    "Id": "97044",
    "ThreadId": "28768",
    "Html": "<p>Thanks, Cheeso.</p>\r\n<p>&nbsp;</p>\r\n<p>As I've edited (perhaps too late), the &quot;Access Denied&quot; is really a problem with the way Vista retains inconsistencies already present in XP's Explorer - no matter what you select in Folder Options, Vista goes its own way. One of the many reasons it will soon disappear from my drives.</p>\r\n<p>As for the data sharing, I've no problem with that; these are simply game files (SC4 - I am writing an organisational tool for it, for my own use). These are DAT-typed files, the content of which is largely text. I don't know &quot;skydrive&quot; but will look into it to see if this can be done. I am sure you have more pressing priorities, and meanwhile I can experiment some more. There are other problems I am having, but I am certain they are due to my misunderstanding something or other.<br>\r\n<br>\r\nOne suggestion, though: .entryfilenames can be enumerated via an integer; .item cannot - it always requires the actual filename, which in the case of folder or empty names seems to run into problems. Would it be possible - in the next release version - to permit .item(integer) as well?<br>\r\n<br>\r\nThanks for your time,<br>\r\n<br>\r\nPeter</p>\r\n<div style=\"border-right:medium none;padding-right:0.2em;border-top:#aaa 0.1em dotted;padding-left:0.2em;padding-bottom:0.2em;margin:1em 0em 2.5em 3em;border-left:medium none;padding-top:0.2em;border-bottom:#aaa 0.1em dotted;font-style:italic\"><br>\r\nCheeso wrote:<br>\r\nHey Doc, <br>\r\n<br>\r\nOn the &quot;Access Denied&quot;  problem, by default the Extract() methods do not overwrite existing files.  There are extraction methods and properties on the ZipEntry dealing with overwriting existing files.   The doc is not clear on this - the doc for the Extract() methods that do not accept an Overwrite flag should explicitly state &quot;existing entries will not be overwritten&quot;.  I am changing that now. 
<br>\r\n<br>\r\n<br>\r\nOn the compression anomalies - would you be willing to share your data?  I don't know how to reproduce it. I've tried but no joy.<br>\r\n<ul>\r\n    <li>when I compress previously compressed data, I get the same-or-smaller sizes.  There is logic in the library to insure this. I don;t know why yours would ever expand. </li>\r\n    <li>I went to the 2gb threshold and beyond.  Never saw the problem you are reporting. </li>\r\n</ul>\r\n<p>Maybe if I had your files I could see the problem.<br>\r\ncan you load them up to skydrive or someplace like that? <br>\r\n<br>\r\nof course it could be private data in which case, we'll have to think of something else.</p>\r\n<br>\r\n<br>\r\n</div>\r\n<br>\r\n",
    "PostedDate": "2008-06-05T09:12:10.113-07:00",
    "UserRole": null,
    "MarkedAsAnswerDate": null
  },
  {
    "Id": "97128",
    "ThreadId": "28768",
    "Html": "ok.<br>\r\non skydrive - let me know if / when you can post your files somewhere.  Skydrive is just a free file-sharing spot.  you sign up and can post up to 50 GB, I think. There are lots of other options.<br>\r\n<br>\r\non the int-based enumeration - sounds like a reasonable request.  Originally I thought that enumerating by integer index would not be useful, but maybe it is. <br>\r\nBefore you open a Work Item for this, could you post some code that shows what you want to do?  I don't really get the problem you might be having in the case of folders or &quot;empty names&quot; - I'm not even sure what it means to have an &quot;empty name&quot; for an entry in a zip file.   So I'd like to better understand the thing you're doing, first. <br>\r\n<br>\r\n",
    "PostedDate": "2008-06-05T15:08:03.153-07:00",
    "UserRole": null,
    "MarkedAsAnswerDate": null
  },
  {
    "Id": "97338",
    "ThreadId": "28768",
    "Html": "Sorry, did not see this earlier. Willco on Sunday - weekends here tend to leave me little time for coding.\r\n",
    "PostedDate": "2008-06-06T12:26:59.537-07:00",
    "UserRole": null,
    "MarkedAsAnswerDate": null
  },
  {
    "Id": "97485",
    "ThreadId": "28768",
    "Html": "Ok, mystery solved: the problem is not with your library but with the ZIP format altogether. I've used other utilities and received the same error, just explained differently - it is the individual size of the files to be compressed:<br>\r\n<br>\r\n&quot;..\\Plugins.zip: Files size is too large for ZIP archive. Use RAR instead&quot;<br>\r\n<br>\r\nThis will not happen when you use the Windows Explorer's built-in compression because the resulting &quot;ZIP&quot; file isn't really a ZIP. Here's a list of the files:<br>\r\n06/06/2008  14:12        74,852,774 CAM.dat<br>\r\n06/06/2008  14:19        77,758,731 PEGPROD.dat<br>\r\n06/06/2008  14:23       111,460,670 Urban.dat<br>\r\n06/06/2008  14:22       127,318,807 Simgoober.dat<br>\r\n06/06/2008  21:38       136,230,834 Eye Candy.dat<br>\r\n06/06/2008  14:18       227,996,297 NDEX.dat<br>\r\n06/06/2008  14:12       930,574,697 BSC.dat<br>\r\n<br>\r\nCertainly the last one, and possibly the penultimate one, will trigger this problem. Now I wonder if there's a way around it..<br>\r\n",
    "PostedDate": "2008-06-08T03:34:55.89-07:00",
    "UserRole": null,
    "MarkedAsAnswerDate": null
  },
  {
    "Id": "98047",
    "ThreadId": "28768",
    "Html": "Sorry, but the &quot;mystery solved&quot; was premature. I've reduced the file sizes below 500MB, and even added a check that every file above 250MB uses the ForceNoCompression flag, but I still get the same error during save: out of memory, since the total ZIP file would exceed 2GB. The error always happens during the save - never during the actual archiving.<br>\r\n<br>\r\nI am at a loss..\r\n",
    "PostedDate": "2008-06-10T16:18:27.477-07:00",
    "UserRole": null,
    "MarkedAsAnswerDate": null
  },
  {
    "Id": "123212",
    "ThreadId": "28768",
    "Html": "I am having this same issue when trying to archive a file of ~800MB.<br>\r\n<br>\r\nThis is my code:<br>\r\n<br>\r\n<hr>\r\n<pre>\r\n// select only the file names that match the current suffix\r\nforeach (String file in (from f in fileNames\r\n                         where f.Contains(sfx)\r\n                         select f))\r\n{\r\n    // add to zip\r\n    zf.AddFileStream(Path.GetFileName(file), String.Empty, new FileStream(file, FileMode.Open));\r\n}\r\n\r\n// save the zip file\r\nzf.Save(Path.Combine(outputDirectory, CalculateZipName(sfx)));\r\n</pre>\r\n<hr>\r\n<br>\r\n<br>\r\nThe exception always happens on the last line, and the trace is the same as was posted above. Sadly I cannot share my data, but I would like to assist in fixing this issue.<br>\r\n<p>On small files, this same code works great.</p>\r\n",
    "PostedDate": "2008-10-06T08:58:42.61-07:00",
    "UserRole": null,
    "MarkedAsAnswerDate": null
  },
  {
    "Id": "123241",
    "ThreadId": "28768",
    "Html": "I have been able to track this down to this method: CRC32.GetCrc32AndCopy(), when it is called from within the ZipEntry.WriteHeader() method.<br>\r\n<br>\r\nIt is inside the while loop in GetCrc32AndCopy that the process starts to eat memory (~800MB in my case, roughly equivalent to my file size). <br>\r\n<br>\r\nI hope this can help track down this issue!\r\n",
    "PostedDate": "2008-10-06T11:24:46.69-07:00",
    "UserRole": null,
    "MarkedAsAnswerDate": null
  },
  {
    "Id": "123357",
    "ThreadId": "28768",
    "Html": "ok, let me look. <br>\r\nI have never been able to get the problem to occur.<br>\r\n<br>\r\n-----<br>\r\nupdate: Hmmm, I can see this is a design &quot;feature&quot;.&nbsp;As a file is compressed, the data is written to a MemoryStream.&nbsp; Everything is kept in memory. Actually, this is true whether or not the data is compressed.&nbsp; (E.g., even if ForceNoCompression is True, you still get all the file data in memory at one time, for each entry added to the zipfile.)&nbsp; Bottom line:&nbsp; all of the file data for an entry is kept in memory at one time, and for very large files this can lead to out-of-memory errors.&nbsp;<br>\r\n<br>\r\nWhat is required is that the data be written to the file or output stream as it is compressed. <br>\r\n<br>\r\nCurrently the approach is a bit naive.\r\n",
    "PostedDate": "2008-10-06T17:15:10.003-07:00",
    "UserRole": null,
    "MarkedAsAnswerDate": null
  },
  {
    "Id": "123364",
    "ThreadId": "28768",
    "Html": "fuzzerd, thanks for bringing this up.&nbsp; <br>\r\nI have re-opened workitem 5028 to fix this problem.<br>\r\n<a href=\"http://www.codeplex.com/DotNetZip/WorkItem/View.aspx?WorkItemId=5028\">http://www.codeplex.com/DotNetZip/WorkItem/View.aspx?WorkItemId=5028</a>\r\n",
    "PostedDate": "2008-10-06T18:02:38.673-07:00",
    "UserRole": null,
    "MarkedAsAnswerDate": null
  },
  {
    "Id": "123475",
    "ThreadId": "28768",
    "Html": "Do you see this being a bugfix release to v1.5 or just v1.6?\r\n",
    "PostedDate": "2008-10-07T07:45:59.613-07:00",
    "UserRole": null,
    "MarkedAsAnswerDate": null
  },
  {
    "Id": "123482",
    "ThreadId": "28768",
    "Html": "<div><span style=\"font-size:13px;font-family:Arial\">Hmm, it's an architectural change in how the zip engine works. </span></div>\r\n<div><span style=\"font-size:13px;font-family:Arial\">It would definitely qualify as a high-impact change.&nbsp; </span></div>\r\n<div><span style=\"font-size:13px;font-family:Arial\">Given that, it would make sense to put it in the next major release.<br>\r\n<br>\r\nWhy do you ask? <br>\r\n</span></div>\r\n",
    "PostedDate": "2008-10-07T08:08:02.197-07:00",
    "UserRole": null,
    "MarkedAsAnswerDate": null
  },
  {
    "Id": "123495",
    "ThreadId": "28768",
    "Html": "I was just trying to get a time frame for this fix. Because of this issue I've been forced to switch to SharpZipLib (a curse word around here, I know). That library does not eat memory on large files, but it does not offer the same ease of use, and its output files are 20% larger than yours.\r\n",
    "PostedDate": "2008-10-07T08:37:38.967-07:00",
    "UserRole": null,
    "MarkedAsAnswerDate": null
  },
  {
    "Id": "123646",
    "ThreadId": "28768",
    "Html": "The timing, I think, is independent of the version number. <br>\r\nI wouldn't want to put it in v1.5, only because it's a fundamental change - definitely not a 2-line bugfix.<br>\r\n<br>\r\nI am testing it now with v1.6. <br>\r\nBut I never had a test case to make it break, so I will need you to verify that it works for you. <br>\r\n",
    "PostedDate": "2008-10-07T18:14:01.227-07:00",
    "UserRole": null,
    "MarkedAsAnswerDate": null
  },
  {
    "Id": "123652",
    "ThreadId": "28768",
    "Html": "Post a message here when the 1.6 release with these changes is available and I'll test it out as soon as possible.\r\n",
    "PostedDate": "2008-10-07T18:45:27.187-07:00",
    "UserRole": null,
    "MarkedAsAnswerDate": null
  },
  {
    "Id": "123702",
    "ThreadId": "28768",
    "Html": "ok, try the latest v1.6 prelim release:<br>\r\n<a href=\"http://www.codeplex.com/DotNetZip/Release/ProjectReleases.aspx?ReleaseId=14569\">http://www.codeplex.com/DotNetZip/Release/ProjectReleases.aspx?ReleaseId=14569</a>&nbsp;\r\n",
    "PostedDate": "2008-10-08T01:55:13.557-07:00",
    "UserRole": null,
    "MarkedAsAnswerDate": null
  },
  {
    "Id": "123782",
    "ThreadId": "28768",
    "Html": "This appears to have fixed my issue; my files are now zipping up great. Memory usage is constant over the entire execution.\r\n",
    "PostedDate": "2008-10-08T08:44:56.86-07:00",
    "UserRole": null,
    "MarkedAsAnswerDate": null
  }
]