Enhance idea.md with new research points and implementation suggestions; add Dockerfile and .gitignore for pre-processing setup
- Expanded the research sections in idea.md to include new innovation points and expected outcomes for noise robustness and feature preservation.
- Introduced a Dockerfile for setting up the pre-processing environment with necessary packages.
- Added a .gitignore file to exclude training data and raw input directories from version control (a sketch follows below).
- Updated README.md with clearer instructions and code formatting for better usability.
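A minimal sketch of the added .gitignore, with entry names assumed from the directories used later in this README (the actual file may list more entries):
```
# Generated training data and raw inputs stay out of version control.
training_data/
training_data_repair/
raw_input/
```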
@@ -10,11 +10,15 @@ Please first download the prepared ABC dataset from [BaiduYun](https://pan.baidu
**\[Optional\]** If you want to split the models and generate the corresponding *.fea files from the raw ABC dataset, please first put the *.yml and *.obj files in folder _abc_data_ (make sure that files in different formats share the same prefix; an example layout is sketched at the end of this subsection). Install the PyYAML package via:
```
$ pip install PyYAML
```
and run:
```
$ python split_and_gen_fea.py
```
You will find the split models and *.fea files in _raw_input_.
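For example, a valid _abc_data_ layout could look like the following (the file names here are hypothetical; only the matching prefixes matter):
```
abc_data/
├── 00000006.yml
├── 00000006.obj
├── 00000007.yml
└── 00000007.obj
```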
@@ -22,39 +26,54 @@ You will find the split models and *.fea files in _raw_input_.
Please first install the Boost and eigen3 libraries:
```
$ sudo apt install libboost-all-dev
$ sudo apt install libeigen3-dev
```
Then run:
```
$ cd PATH_TO_NH-REP/code/pre_processing
$ mkdir build && cd build
$ cmake ..
$ make
```
You can generate the training data:
```
$ cd ..
$ python gen_training_data_yaml.py
```
The generated training data can be found in the _training_data_ folder.
If you do not have a yaml file and want to generate sample points from meshes, you can prepare the *.fea file as sharp feature curves of the meshes, then run:
```
$ python gen_training_data_mesh.py
```
Please make sure that you set 'in_path' in _gen_training_data_yaml.py_ and _gen_training_data_mesh.py_ to the path containing the *.fea files.
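For example, the assignment in either script might look like this (the `../raw_input/` value is only an assumed example; point it at whatever directory actually holds your *.fea files):
```
# in gen_training_data_yaml.py / gen_training_data_mesh.py
in_path = '../raw_input/'  # assumed example path; must contain the *.fea files
```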
When patch decomposition is conducted (as for model 00007974_5), there will be *fixtree.obj and *fixtree.fea files in _training_data_, which can be used for generating point samples in a later round:
```
$ python gen_training_data_yaml.py -r
```
or
```
$ python gen_training_data_mesh.py -r
```
You can find the generated training data of the decomposed patch in _training_data_repair_. By default we only decompose one patch, which is enough for most models. But if you find *fixtree.obj and *fixtree.fea in _training_data_repair_, more patches need to be decomposed. There are two ways to achieve this. First, you can copy _training_data_repair/*fixtree.obj_ and _training_data_repair/*fixtree.fea_ to _training_data_ and re-run 'python gen_training_data_yaml.py -r', repeating until enough patches are decomposed (i.e. *.conf files can be found in _training_data_repair_); a loop sketch for this first approach follows the next paragraph. The other way is to decompose all patches at once; to achieve this, simply uncomment the following line in _FeatureSample/helper.cpp_:
After that, rebuild the executable files and re-run 'python gen_training_data_yaml.py' and 'python gen_training_data_yaml.py -r'. The generated training data will be in _training_data_repair_.
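A minimal sketch of the copy-and-rerun loop for the first approach, assuming the commands are run from _code/pre_processing_ with the yaml pipeline, and that _training_data_ and _training_data_repair_ live under that directory:
```
# Repeat decomposition until no *fixtree files remain in training_data_repair,
# i.e. until *.conf files are produced instead.
python gen_training_data_yaml.py -r   # initial repair round, as above
while ls training_data_repair/*fixtree.obj >/dev/null 2>&1; do
    cp training_data_repair/*fixtree.obj training_data_repair/*fixtree.fea training_data/
    python gen_training_data_yaml.py -r
done
```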
With the Dockerfile added in this change, you can also run the pre-processing environment in a container:
```
$ docker run -it --name brep_processor -v ~/NH-Rep/code/pre_processing:/app brep_pre_processing:v1
```
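The Dockerfile itself is not shown in this excerpt; a minimal sketch of what it could contain, based on the dependencies listed above (the base image, package set, and tag are assumptions, not the actual file):
```
# Hypothetical sketch of the pre-processing Dockerfile; the real file ships with this change.
FROM ubuntu:20.04
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
        build-essential cmake \
        libboost-all-dev libeigen3-dev \
        python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*
RUN pip3 install PyYAML
WORKDIR /app
```
Built, for example, with 'docker build -t brep_pre_processing:v1 .' from _code/pre_processing_, which matches the image tag used in the run command above.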