Data De-Duplication with Adaptive Chunking and Accelerated Modification Identifying
2016 (English). In: Computing and Informatics, ISSN 1335-9150, Vol. 35, no. 3, pp. 586-614. Article in journal (Refereed), Published
A data de-duplication system pursues not only a high de-duplication rate, i.e. the aggregate reduction in storage requirements gained from de-duplication, but also a high de-duplication speed. To solve the problem of random parameter setting in Content-Defined Chunking (CDC), a self-adaptive data chunking algorithm is proposed. The algorithm improves the de-duplication rate by running a pre-processing de-duplication pass on samples of the classified files and then selecting appropriate algorithm parameters. Meanwhile, FastCDC, a fast content-based data chunking algorithm, is adopted to solve the problem of the low de-duplication speed of CDC. By introducing a de-duplication factor and an acceleration factor, and adjusting these two parameters, FastCDC can significantly boost de-duplication speed without sacrificing the de-duplication rate. The experimental results demonstrate that our proposed method improves the de-duplication rate by about 5%, while FastCDC increases de-duplication speed by 50% to 200% at the expense of less than 3% loss in de-duplication rate.
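To make the idea of content-defined chunking concrete, the following is a minimal sketch of a CDC chunker using a Gear-like rolling hash, in the spirit of FastCDC. The gear table, mask, and size limits here are illustrative assumptions for the sketch, not the paper's actual parameters, and the normalization/acceleration tricks of real FastCDC are omitted.

```python
import hashlib

# Illustrative 256-entry "gear" table, derived deterministically from
# each byte value (an assumption of this sketch; real implementations
# typically use a fixed random table).
GEAR = [int.from_bytes(hashlib.md5(bytes([i])).digest()[:8], "big")
        for i in range(256)]

MIN_SIZE = 2 * 1024    # assumed minimum chunk size
AVG_SIZE = 8 * 1024    # assumed target average chunk size
MAX_SIZE = 64 * 1024   # assumed maximum chunk size
MASK = AVG_SIZE - 1    # boundary expected roughly every AVG_SIZE bytes

def chunk(data: bytes):
    """Yield content-defined chunks of `data`.

    A chunk boundary is declared when the rolling fingerprint, masked
    by MASK, hits zero; boundaries therefore depend on content, not on
    fixed offsets, so an insertion only disturbs nearby chunks.
    """
    start = 0
    n = len(data)
    while start < n:
        fp = 0
        end = min(start + MAX_SIZE, n)
        cut = end  # forced cut at MAX_SIZE (or end of data)
        i = start
        while i < end:
            # Gear-style rolling hash: shift and add a table entry.
            fp = ((fp << 1) + GEAR[data[i]]) & 0xFFFFFFFFFFFFFFFF
            i += 1
            # Only test for a boundary once the minimum size is reached.
            if i - start >= MIN_SIZE and (fp & MASK) == 0:
                cut = i
                break
        yield data[start:cut]
        start = cut
```

A de-duplication system would then fingerprint each chunk (e.g. with SHA-1 or SHA-256) and store only chunks whose fingerprints have not been seen before.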
Place, publisher, year, edition, pages
Slovak Academy of Sciences, Institute of Informatics, 2016. Vol. 35, no. 3, pp. 586-614.
Keywords
Data de-duplication, self-adaptive, FastCDC
Identifiers
URN: urn:nbn:se:liu:diva-131589
ISI: 000382272600004
OAI: oai:DiVA.org:liu-131589
DiVA: diva2:974633
Funding Agencies
National Key Technology R&D Program [2011BAH04B03, 2016YFB1000303]; NSFC; Marie Curie IRSES Actions of the European Union Seventh Framework Program (EU-FP7 Contract)
Available from: 2016-09-27. Created: 2016-09-27. Last updated: 2016-10-03. Bibliographically approved.