- Introduced a new command-line argument `--csv_name` to allow a custom CSV filename for evaluation results
- Updated the `compute_all()` function to use the user-specified CSV filename (a minimal sketch of this pattern follows this list)
- Improved flexibility of evaluation script by enabling users to specify output file name
- Maintained existing logging and error handling mechanisms
- Added detailed comments to clarify the steps involved in the normalization of mesh vertices.
- Implemented calculations for the bounding box size and center point of the model.
- Updated the vertex normalization logic to center and scale vertices to the range [-0.9, 0.9] (see the normalization sketch after this list).
- Improved code readability and maintainability by providing clear explanations for each processing step.
- Introduced `FeatureSampleConfig` class to encapsulate configuration parameters for feature sampling from command line arguments.
- Implemented methods for parsing and validating input parameters, ensuring required fields are provided.
- Updated `Source.cpp` to utilize the new configuration class, streamlining command line argument handling and improving code readability.
- Enhanced error handling for missing or invalid parameters, promoting robustness in the feature sampling process.
- Expanded the research sections in idea.md to include new innovation points and expected outcomes for noise robustness and feature preservation.
- Introduced a Dockerfile for setting up the pre-processing environment with necessary packages.
- Added a .gitignore file to exclude training data and raw input directories from version control.
- Updated README.md with clearer instructions and code formatting for better usability.
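
The `--csv_name` addition follows the standard argparse pattern sketched below; the default filename, the `compute_all()` signature, and the CSV column names are illustrative assumptions rather than the repository's actual code:

```python
import argparse

def compute_all(name_list, csv_name):
    # Evaluate every model listed in name_list (evaluation details omitted)
    # and write the metrics to the user-specified CSV file instead of a
    # hard-coded filename.
    with open(csv_name, 'w') as f:
        f.write('name,metric\n')  # hypothetical header row

if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--name_list', type=str, default='broken_bullet_name.txt',
                        help='names of models to be evaluated')
    parser.add_argument('--csv_name', type=str, default='eval_results.csv',
                        help='custom CSV filename for the evaluation results')
    args = parser.parse_args()
    compute_all(args.name_list, args.csv_name)
```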
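
The normalization described above amounts to centering the mesh at its bounding-box center and uniformly scaling it so that the longest bounding-box edge spans [-0.9, 0.9]; here is a minimal numpy sketch, where the function name and the plain vertex-array interface are assumptions:

```python
import numpy as np

def normalize_vertices(vertices):
    """Center vertices at the bounding-box center and scale them into [-0.9, 0.9]."""
    v = np.asarray(vertices, dtype=np.float64)
    bb_min = v.min(axis=0)            # per-axis bounding-box minimum
    bb_max = v.max(axis=0)            # per-axis bounding-box maximum
    center = (bb_min + bb_max) / 2.0  # bounding-box center point
    extent = (bb_max - bb_min).max()  # longest bounding-box edge
    scale = 1.8 / extent              # map the longest edge to length 1.8 = 0.9 - (-0.9)
    return (v - center) * scale
```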
parser.add_argument('--name_list', type=str, default='broken_bullet_name.txt', help='names of models to be evaluated; to evaluate the whole dataset, set it to all_names.txt')
cxxopts::Options options("FeaturedModelPointSample", "Point Sampling program for featured CAD models (author: Haoxiang Guo, Email: guohaoxiangxiang@gmail.com)");
@@ -10,11 +10,15 @@ Please first download the prepared ABC dataset from [BaiduYun](https://pan.baidu
**\[Optional\]** If you want to split the models and generate the corresponding *.fea files from the raw ABC dataset, please first put the *.yml and *.obj files in folder _abc_data_ (make sure that files in different formats share the same prefix). Install the PyYAML package via:
```
$ pip install PyYAML
```
and run:
```
$ python split_and_gen_fea.py
```
You will find the split models and *.fea files in _raw_input_.
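
Since the split script pairs *.yml and *.obj files by their shared prefix, a quick sanity check before running it is to compare the file stems found in _abc_data_; the snippet below is illustrative and not part of the repository (adapt the prefix extraction if your filenames carry format-specific suffixes):

```python
from pathlib import Path

abc_dir = Path('abc_data')
yml_prefixes = {p.stem for p in abc_dir.glob('*.yml')}
obj_prefixes = {p.stem for p in abc_dir.glob('*.obj')}

# Every prefix should appear in both sets; anything left over is missing its partner file.
print('yml without matching obj:', sorted(yml_prefixes - obj_prefixes))
print('obj without matching yml:', sorted(obj_prefixes - yml_prefixes))
```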
@@ -22,39 +26,55 @@ You will find the split models and *.fea files in _raw_input_.
Please first install the Boost and Eigen3 libraries:
```
$ sudo apt install libboost-all-dev
$ sudo apt install libeigen3-dev
```
Then run:
```
$ cd PATH_TO_NH-REP/code/pre_processing
$ mkdir build && cd build
$ cmake ..
$ make
```
You can generate the training data:
```
$ cd ..
$ python gen_training_data_yaml.py
```
The generated training data can be found in _training_data_ folder.
If you do not have a yaml file and want to generate sample points from meshes, you can prepare *.fea files describing the sharp feature curves of the meshes, then run:
```
$ python gen_training_data_mesh.py
```
Please make sure that you set 'in_path' in _gen_training_data_yaml.py_ and _gen_training_data_mesh.py_ to the path containing the *.fea files.
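
For example, the assignment might look like the line below; treat the exact value as a placeholder for your own data location:

```python
# in gen_training_data_yaml.py and gen_training_data_mesh.py
in_path = 'raw_input'  # folder containing the *.fea files (and the matching models)
```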
When patch decomposition is conducted (as for model 00007974_5), there will be *fixtree.obj and *fixtree.fea files in _training_data_, which can be used for generating point samples in a later round:
```
$ python gen_training_data_yaml.py -r
```
or
```
$ python gen_training_data_mesh.py -r
```
You can find the generated training data of the decomposed patch in _training_data_repair_. By default we only decompose one patch, which is enough for most models. But if you find *fixtree.obj and *fixtree.fea in _training_data_repair_, that means more patches need to be decomposed. There are two ways to achieve this. First, you can copy _training_data_repair/*fixtree.obj_ and _training_data_repair/*fixtree.fea_ to _training_data_ and re-run 'python gen_training_data_yaml.py -r', repeating until enough patches are decomposed (i.e. *.conf files can be found in _training_data_repair_). Another way is to decompose all patches at once; to achieve this, simply uncomment the following line in _FeatureSample/helper.cpp_:
After that, rebuild the executables and re-run 'python gen_training_data_yaml.py' and 'python gen_training_data_yaml.py -r'. The generated training data will appear in _training_data_repair_.
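
The copy-and-re-run cycle of the first approach can be scripted; the sketch below is a rough automation under the assumption that it is run from _code/pre_processing_ and that the folders are exactly the _training_data_ and _training_data_repair_ mentioned above:

```python
import glob
import shutil
import subprocess

# Keep decomposing until training_data_repair no longer contains *fixtree files,
# i.e. until only the *.conf files of fully decomposed patches remain.
for _ in range(10):  # safety bound on the number of decomposition rounds
    fixtree_files = (glob.glob('training_data_repair/*fixtree.obj')
                     + glob.glob('training_data_repair/*fixtree.fea'))
    if not fixtree_files:
        break
    for path in fixtree_files:
        shutil.copy(path, 'training_data')  # feed the partially decomposed patch back in
    subprocess.run(['python', 'gen_training_data_yaml.py', '-r'], check=True)
```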
To set up the pre-processing environment with the provided Dockerfile instead of installing the dependencies natively, build the image and start a container with the pre-processing code mounted at /app:
$ docker build -t brep_pre_processing:v1 .
$ docker run -it --name brep_processor -v ~/NH-Rep/code/pre_processing:/app brep_pre_processing:v1