{
  "WorkItem": {
    "AffectedComponent": {
      "Name": "",
      "DisplayName": ""
    },
    "ClosedComment": "",
    "ClosedDate": null,
    "CommentCount": 0,
    "Custom": null,
    "Description": "Hi,\n \nI'm using the library to compress a folder into several segment files. I would like to process the segmented files in a different thread, immediately after they are saved. How can I detect that a segment was saved\n (renamed from the temp file to zXX)?\n \nI tried using the\nZipProgressEventType.Saving_AfterSaveTempArchive, Saving_BeforeRenameTempArchive and\nSaving_AfterRenameTempArchive events, but they are raised only when the last segment is renamed from the temp file.\n \nThank you, Michal",
    "LastUpdatedDate": "2013-02-21T18:43:08.327-08:00",
    "PlannedForRelease": "",
    "ReleaseVisibleToPublic": false,
    "Priority": {
      "Name": "Low",
      "Severity": 50,
      "Id": 1
    },
    "ProjectName": "DotNetZip",
    "ReportedDate": "2011-07-30T04:20:03.99-07:00",
    "Status": {
      "Name": "Proposed",
      "Id": 1
    },
    "ReasonClosed": {
      "Name": "Unassigned"
    },
    "Summary": "Event to detect that a segment was saved?",
    "Type": {
      "Name": "Issue",
      "Id": 3
    },
    "VoteCount": 2,
    "Id": 14007
  },
  "FileAttachments": [],
  "Comments": [
    {
      "Message": "Michal, I'm just thinking through your request.  \r\n\r\nIn the simplest case, DotNetZip can emit an event when it does the rename.  There may be a problem with this... In some cases the file is renamed from the temporary name to File.zXX, but the library is not finished with that segment.  This can happen when a single entry in the archive spans multiple segments in the output. For example, suppose there is an entry that results in almost 1 GB of compressed data, and the segment size is 256 MB.  In this case there will be 4 output segments containing the data for this entry.  The metadata for the entry and the first ~256 MB of compressed data will go into File.z01, then File.z02 - File.z03 will contain only compressed data, and File.z04 will contain the final portion of compressed data (something less than 256 MB), plus any metadata trailer.  After the last of the compressed data is emitted, DotNetZip needs to seek back to the first segment to update the metadata for the entry, some of which is not known until the entry is fully compressed.  So in this case DotNetZip re-opens File.z01 and modifies it, then closes it again. \r\n\r\nYour plan to open and \"process\" the z01 file as soon as it has been renamed has the potential to interfere with this update, because DotNetZip may modify the file after the rename.  Even if I suppose that your app's actions will be benign, an event that is issued on rename is sort of meaningless, because the renamed file may not yet be in its final form. \r\n\r\nNow, you could say, \"ok, then have DotNetZip emit the event when it is sure there will be no more updates on a segment.\"  In that case the event has real meaning, and any read operations by the application on the segment could proceed with confidence that the segment will not see any further changes. \r\n\r\nThat seems a friendly modification.  In that case, either the events will arrive out of order, or they will arrive in bunches. Seems to me it's better to have them arrive in bunches.  I will explain the \"out of order\" issue: in the above example, File.z02 and File.z03 are complete as soon as they are renamed; there will be no updates, because they do not hold metadata.  Only File.z01 is subject to change.  If the events fire when the files are actually complete, you'd get an order like: File.z02, File.z03, File.z04, File.z01, File.z06, File.z07, File.z05, and so on.  This is the actual order in which the segmented files are finalized, but it is not very intuitive or clear to a calling application, so I am hesitant to expose that ordering to the caller. \r\n\r\nA better approach here seems to be to issue the events in bunches - or better, to issue a single event for a batch of files. In practice, it would be a \"finished with batch of segments\" event, and it would provide a range or set of filenames, like \"File.z01, File.z02, File.z03\", that are complete at the time of the event.  \r\nI think I like this option better. In the above example, File.z04 would not be included in the first \"batch complete\" event: while the segment that will eventually be known as File.z04 contains the last bytes of the first entry, there is still room in that segment, so it will not yet have been renamed, even after the first entry has been completely saved.  Hence the event that fires after entry 1 is completely saved will tell you that segments through File.z03 are done.  \r\n\r\nI think I could do that without too much disruption, and it would make sense for applications.\r\n\r\nInterested in your feedback.\r\n",
      "PostedDate": "2011-08-03T09:33:45.537-07:00",
      "Id": -2147483648
    },
    {
      "Message": "Thank you for your comprehensive answer.\r\n\r\nI tried to write a workaround based on the FileSystemWatcher class and waiting for the next Saving_EntryBytesRead event, but it's not a good solution.\r\n\r\nIn my opinion, I would let the user choose between \"out of order\" and \"finished with the batch of segments\". I think that different applications require different approaches. The default could be \"finished with the batch of segments\". On the other hand, I agree with you that the \"out of order\" approach is not very intuitive.\r\n\r\nFor example, in my application I would choose the \"out of order\" approach. The reason is that my application sends files to cloud storage. The order is not relevant, but it is convenient (quickest) to send files as soon as possible.\r\n\r\nThank you very much.\r\n Michal\r\n",
      "PostedDate": "2011-08-03T15:33:09.25-07:00",
      "Id": -2147483648
    },
    {
      "Message": "",
      "PostedDate": "2013-02-21T18:43:08.327-08:00",
      "Id": -2147483648
    }
  ]
}