Today's technology allows us to capture and store huge amounts of data, whatever their origin or nature. In recent years, and in many fields, digital data have been collected at a frenetic pace and saved in a growing number of electronic datasets. In this context, a great deal of time and effort is spent developing new computational theories, techniques, methods, and tools for modeling the systems the data come from. A fundamental problem to face is the inherent imperfection of the real-world information stored in these datasets. Although many methods and techniques assume that perfect data are given, in reality the data are never as good as engineers would like. Data often suffer damage that distorts the interpretations drawn from them. If theories, techniques, and methods do not take the imperfection of information into account, the resulting models are of low quality, defective, or unnecessarily complex. Ultimately, this affects the interpretations and decisions that people base on these data.
It is therefore essential to process data while accounting for imperfection at every level of each process. To alleviate part of this problem, we propose a tool, called "NIP imperfection processor", for handling datasets. With this tool, users can introduce various types of imperfection into datasets written in well-known formats from the literature and/or in custom formats defined by the user. Supporting different dataset formats makes it possible both to transform datasets between formats and to generate different types of imperfection.
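NIP's own API is not shown here; purely as an illustration of the kind of transformation such a tool performs, the following minimal Python sketch injects one common type of imperfection, missing values, into a tabular dataset at a given rate. The function name, parameters, and sample data are hypothetical and do not correspond to NIP's interface.

```python
import random

def inject_missing(rows, rate, missing_token="?", seed=0):
    """Replace roughly a fraction `rate` of the cell values with
    `missing_token`, simulating missing-value imperfection.

    `rows` is a list of equal-length lists (one list per record).
    A new dataset is returned; the input is left untouched. A fixed
    `seed` makes the injected imperfection reproducible.
    """
    rng = random.Random(seed)
    return [[missing_token if rng.random() < rate else value
             for value in row]
            for row in rows]

# Hypothetical toy dataset: two numeric attributes and a class label.
data = [[5.1, 3.5, "setosa"],
        [6.2, 2.9, "virginica"],
        [5.9, 3.0, "versicolor"]]

noisy = inject_missing(data, rate=0.3, seed=42)
```

The same idea extends to other imperfection types (e.g., additive noise on numeric attributes or label swaps), each applied as an independent, seeded transformation over the dataset.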
For users who deal with data and who design and build techniques that support these kinds of imperfect data, we provide the software "NIP imperfection processor". The authors hope that this tool will be useful to you.