{
  "WorkItem": {
    "AffectedComponent": {
      "Name": "",
      "DisplayName": ""
    },
    "ClosedComment": "fixed in change set 25127.",
    "ClosedDate": "2008-10-08T01:50:07.873-07:00",
    "CommentCount": 0,
    "Custom": null,
    "Description": "DotNetZip does not handle large files well.\nThe approach used writes the compressed file data into a MemoryStream, which implies that for each entry, all of the file data for that entry is held in memory at a single time.  For very large files this can be prohibitively expensive in terms of memory consumption.  In extreme cases, it can result in out-of-memory exceptions, with this kind of stack trace:\n \nException of type 'System.OutOfMemoryException' was thrown. (System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.\n   at System.IO.MemoryStream.set_Capacity(Int32 value)\n   at System.IO.MemoryStream.EnsureCapacity(Int32 value)\n   at System.IO.MemoryStream.Write(Byte[] buffer, Int32 offset, Int32 count)\n   at System.IO.Compression.DeflateStream.InternalWrite(Byte[] array, Int32 offset, Int32 count, Boolean isAsync)\n   at System.IO.Compression.DeflateStream.Write(Byte[] array, Int32 offset, Int32 count)\n   at Ionic.Utils.Zip.CRC32.GetCrc32AndCopy(Stream input, Stream output)\n   at Ionic.Utils.Zip.ZipEntry.WriteHeader(Stream s, Byte[] bytes)\n   at Ionic.Utils.Zip.ZipEntry.Write(Stream outstream)\n   at Ionic.Utils.Zip.ZipFile.Save()\n   at Ionic.Utils.Zip.ZipFile.Save(String ZipFileName)\n \nWith a more intelligent approach, it need not be so expensive.",
    "LastUpdatedDate": "2013-05-16T05:32:45.753-07:00",
    "PlannedForRelease": "",
    "ReleaseVisibleToPublic": false,
    "Priority": {
      "Name": "Low",
      "Severity": 50,
      "Id": 1
    },
    "ProjectName": "DotNetZip",
    "ReportedDate": "2008-05-28T18:11:22.93-07:00",
    "Status": {
      "Name": "Closed",
      "Id": 4
    },
    "ReasonClosed": {
      "Name": "Unassigned"
    },
    "Summary": "Save fails at large result file",
    "Type": {
      "Name": "Issue",
      "Id": 3
    },
    "VoteCount": 2,
    "Id": 5028
  },
  "FileAttachments": [],
  "Comments": [
    {
      "Message": "I'll have to look at this!",
      "PostedDate": "2008-05-29T10:05:11.077-07:00",
      "Id": -2147483648
    },
    {
      "Message": "I tried this on my machine and did not see it fail?  \r\nI will test it further.  Without being able to reproduce it.... I won't be able to fix it. \r\n",
      "PostedDate": "2008-05-30T17:30:36.943-07:00",
      "Id": -2147483648
    },
    {
      "Message": "I have never been able to reproduce this. . . ",
      "PostedDate": "2008-07-16T09:06:10.947-07:00",
      "Id": -2147483648
    },
    {
      "Message": "",
      "PostedDate": "2008-08-15T18:59:44.3-07:00",
      "Id": -2147483648
    },
    {
      "Message": "closing as \"won't fix\".  If you have a test case that reproduces this situation, please open a new workitem and post the test case.\r\n\r\n** Closed by Cheeso 8/15/2008 6:59 PM",
      "PostedDate": "2008-10-06T17:58:43.577-07:00",
      "Id": -2147483648
    },
    {
      "Message": "Re-opening. this is a design feature.  It is possible to code around it. \r\n",
      "PostedDate": "2008-10-06T17:58:44.043-07:00",
      "Id": -2147483648
    },
    {
      "Message": "",
      "PostedDate": "2008-10-06T18:01:53-07:00",
      "Id": -2147483648
    },
    {
      "Message": "",
      "PostedDate": "2008-10-06T18:03:27.887-07:00",
      "Id": -2147483648
    },
    {
      "Message": "",
      "PostedDate": "2008-10-07T07:43:42.353-07:00",
      "Id": -2147483648
    },
    {
      "Message": "I've made a significant change to the handling of entries when saving them to a zipfile, to correct this problem.   In re-working the library to stream the file data, it no longer has to keep all of the data in memory at one time.  The file data is read in, compressed and (maybe) encrypted, and hen written out to the zip archive, via streams.   This is much more memory efficient.  The drawback is that files need to be scanned twice, in the case where encryption is used; this can significantly slow down the overall process for very large files.   I haven't done ay performance measurements. \r\n\r\n\r\n\r\n",
      "PostedDate": "2008-10-08T01:49:48.687-07:00",
      "Id": -2147483648
    },
    {
      "Message": "",
      "PostedDate": "2008-10-08T01:50:07.873-07:00",
      "Id": -2147483648
    },
    {
      "Message": "",
      "PostedDate": "2013-02-21T18:44:49.967-08:00",
      "Id": -2147483648
    },
    {
      "Message": "",
      "PostedDate": "2013-05-16T05:32:45.753-07:00",
      "Id": -2147483648
    }
  ]
}
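
The Description and the 2008-10-08 comment above outline the fix at a high level: stream each entry's data through compression directly onto the output, instead of buffering the whole compressed entry in a MemoryStream. Below is a minimal C# sketch of that technique. The names here (StreamingSaveSketch, CompressOnto) are hypothetical and this is not DotNetZip's actual code; a real zip writer also has to emit local headers and a central directory. The sketch only illustrates the chunked read/CRC/deflate/write loop that keeps memory use bounded.

// A minimal sketch of the streaming approach described in the work item:
// read the input in fixed-size chunks, keep a running CRC32, and deflate
// straight onto the archive stream, so no entry is fully held in memory.
// StreamingSaveSketch and CompressOnto are hypothetical names, not the
// DotNetZip API.
using System.IO;
using System.IO.Compression;

static class StreamingSaveSketch
{
    // Compress the file at inputPath onto 'archive' in 80 KB chunks and
    // return the CRC32 of the uncompressed bytes.
    public static uint CompressOnto(string inputPath, Stream archive)
    {
        uint crc = 0xFFFFFFFFu;
        using (var input = File.OpenRead(inputPath))
        using (var deflate = new DeflateStream(archive, CompressionMode.Compress, leaveOpen: true))
        {
            var buffer = new byte[81920];   // bounded buffer, never the whole file
            int n;
            while ((n = input.Read(buffer, 0, buffer.Length)) > 0)
            {
                // Bitwise CRC-32 (reflected, polynomial 0xEDB88320).
                for (int i = 0; i < n; i++)
                {
                    crc ^= buffer[i];
                    for (int bit = 0; bit < 8; bit++)
                        crc = (crc & 1) != 0 ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
                }
                // Compressed bytes go straight to the archive stream.
                deflate.Write(buffer, 0, n);
            }
        }
        return crc ^ 0xFFFFFFFFu;
    }
}

The double-scan drawback noted in the comment likely follows from traditional PKZIP encryption, which needs the entry's CRC before the encrypted data can be written; with streaming, that means one pass to compute the CRC and a second to compress and encrypt. The sketch above omits encryption entirely, so it makes only a single pass.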