{
  "WorkItem": {
    "AffectedComponent": {
      "Name": "",
      "DisplayName": ""
    },
    "ClosedComment": "",
    "ClosedDate": null,
    "CommentCount": 0,
    "Custom": null,
    "Description": "Implement a zip merge capability, to allow an application to move a raw ZipEntry from one ZipFile to another, without need for decompressing and recompressing, decrypting and encrypting.  \n \nIt might be as simple as a single new public method: \n  public ZipEntry ZipFile.ImportEntry(ZipEntry) \n \nThere would need to be a new internal property on the ZipEntry\n  internal stream ZipEntry.RawEntry { get ; }\n \nwhich would provide the raw bytes associated to the entry. \n \nAlthough, it's possible we might want to make  that property public.  \n \nThe ZipFile.ImportEntry function would have to create a new ZipEntry, then mark the source of the entry as \"ImportedEntry\" or something, a new enum member in ZipEntrySource.   When the entry has that as its source, when saving / writing the Entry, we'd need to read directly from the RawEntry stream and write directly to the zipfile WriteStream.  \n \nWould need to do the right thing for the SaveProgress method.  \n \nNot sure how valuable this would be.",
    "LastUpdatedDate": "2014-09-05T08:16:28.213-07:00",
    "PlannedForRelease": "v2.0 - planning",
    "ReleaseVisibleToPublic": true,
    "Priority": {
      "Name": "Medium",
      "Severity": 100,
      "Id": 2
    },
    "ProjectName": "DotNetZip",
    "ReportedDate": "2009-06-16T04:59:35.397-07:00",
    "Status": {
      "Name": "Proposed",
      "Id": 1
    },
    "ReasonClosed": {
      "Name": "Unassigned"
    },
    "Summary": "Implement a Zip merge capability",
    "Type": {
      "Name": "Feature",
      "Id": 1
    },
    "VoteCount": 12,
    "Id": 7896
  },
  "FileAttachments": [],
  "Comments": [
    {
      "Message": "It would definitely have some value. For example I would like to store some files in a database, in a compressed form (possibly gzip-ped), and then build up packages (zip files) in an ASP.Net application, by putting different combinations of those files into a package. With having this possibility I would not need to decompress the files, and then compress them again.",
      "PostedDate": "2009-08-14T07:50:29.487-07:00",
      "Id": -2147483648
    },
    {
      "Message": "I'm considering what the public interface for this feature would look like.  Right now I'm thinking; \r\n\r\n - ZipEntry ZipFile.AddRawEntry(byte[]) - add a raw entry using input from the specified byte array \r\n - ZipEntry ZipFile.AddRawEntry(stream) - add a raw entry using input from the specified stream\r\n - ZipEntry ZipFile.AddRawEntry(ZipEntry) - add a raw entry using a ZipEntry from a different ZipFile\r\n - Stream ZipEntry.OpenReaderRaw() - opens and returns a Stream that allows the app to read the raw entry data without decrypting or decompressing\r\n  - void ZipEntry.WriteRaw(stream) - writes the raw entry data (neither decrypted nor decompressed) to the provided stream\r\n\r\nThe contents of the raw entry would be raw zip data, a snip from a binary zip file. It would include all the zipentry header data (metadata like entry name, timestamp, file attributes, size (uncompressed and compressed)), and also the maybe-compressed, maybe-encrypted file data.  \r\n\r\nWhen adding an entry this way, it wouldn't be possible or correct to provide a name for the entry to be added, because the name is specified in the raw data.  \r\n\r\nTo \"merge\" zips using the stream APIs, you'd open ZipFile instances on two or more .zip archives.  One will be the destination and the other(s) will be source.  For each zipentry you want to copy from source to destination, you would do this:\r\n\r\n  using (Stream s = source[entryName].OpenReaderRaw()) {\r\n    destination.AddRawEntry( s ); \r\n  }\r\n\r\n  \r\nThe AddRawEntry() method that accepts a ZipEntry would just be a wrapper around that, allowing this simplified programming model: \r\n\r\n  destination.AddRawEntry(source[entryName]);\r\n\r\nor, for merging all entries in one zip into another: \r\n\r\n  foreach (ZipEntry e in source) \r\n  {\r\n    destination.AddRawEntry(e); \r\n  }\r\n\r\nYou would need to call desintation.Save() before closing the source ZipFile. 
\r\n\r\nLooking for feedback on this.\r\nHow valuable would it be?  And, is the proposed programming model the correct one?\r\n",
      "PostedDate": "2010-01-01T11:27:07.64-08:00",
      "Id": -2147483648
    },
    {
      "Message": "This would be an ideal solution to one of our biggest performance problems in a server application I'm working with. We continuously read data from a source zip-file, only to zip them up with exactly the same metadata in another zipfile. I think I would use the AddRawEntry(ZipEntry) version of the API. ",
      "PostedDate": "2010-01-22T16:35:25.363-08:00",
      "Id": -2147483648
    },
    {
      "Message": "This feature would help alot in case when merging archives generated on unix and windows platforms. There is a problem finding a tool capable of merging archives and keeping file attributes. ",
      "PostedDate": "2010-03-18T11:20:33.88-07:00",
      "Id": -2147483648
    },
    {
      "Message": "Your proposed programming model looks good.",
      "PostedDate": "2011-03-15T11:01:08.853-07:00",
      "Id": -2147483648
    },
    {
      "Message": "You can check patch file ID:6214 (Source Code -> Pathes). ",
      "PostedDate": "2011-04-11T04:34:11.273-07:00",
      "Id": -2147483648
    },
    {
      "Message": "There is another use case that would make this functionality very helpful.  I am receiving data from thousands of streams simultaneously.  Currently I am saving that data to MemoryStreams and writing it at the end of processing as a set of ZipFile entries.  I would like to save memory by compressing those streams as they're being received, and storing the 85% compressed version in MemoryStreams rather than the full data.\r\n\r\nI should be able to wrap a MemoryStream in a ZlibStream and then write that MemoryStream to a zip file once processing is complete.  The above AddRawEntry functionality would accomplish this nicely, avoiding a recompress.\r\n\r\nThis would not only save memory but would allow me to multithread the compression (one thread per stream, multiple streams at a time, as opposed to multiple threads per stream, one stream at a time as is currently available).\r\n\r\nCan we revisit getting this functionality added in an upcoming release?",
      "PostedDate": "2014-09-05T08:16:11.247-07:00",
      "Id": -2147483648
    },
    {
      "Message": "",
      "PostedDate": "2014-09-05T08:16:28.213-07:00",
      "Id": -2147483648
    }
  ]
}