- Implemented `tracing()` method in training pipeline to export model as TorchScript
- Created dedicated TorchScript directory in checkpoints path
- Added model tracing with example input tensor
- Saved traced model as `model_h.pt` in the TorchScript directory
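The export flow in the bullets above can be sketched roughly as below. Only the dedicated `TorchScript` directory and the `model_h.pt` filename come from the changelog; the `tracing` signature, the example model, and the input shape are illustrative assumptions.

```python
import os

import torch


def tracing(model: torch.nn.Module, checkpoints_path: str,
            example_input: torch.Tensor) -> str:
    """Trace a model with an example input and save it as TorchScript.

    Saves to <checkpoints_path>/TorchScript/model_h.pt, per the changelog;
    everything else here is an assumption.
    """
    ts_dir = os.path.join(checkpoints_path, "TorchScript")
    os.makedirs(ts_dir, exist_ok=True)
    model.eval()
    # Record the ops executed on the example input into a static graph
    traced = torch.jit.trace(model, example_input)
    out_path = os.path.join(ts_dir, "model_h.pt")
    traced.save(out_path)
    return out_path
```

The traced file can then be loaded with `torch.jit.load` from C++ or Python without the original model class.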
- Updated Docker run command with improved GPU and user settings
- Modified learning rate scheduler to support epoch-based adjustments
- Refactored loss computation to return detailed loss components
- Added TensorBoard logging for individual loss components
- Implemented checkpoint saving mechanism with configurable frequency
- Updated training script to use dynamic configuration and improved error handling
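The epoch-based scheduler and configurable checkpoint frequency can be sketched as two small helpers; the decay shape, decay interval, and save interval are assumptions, not the project's actual defaults:

```python
def lr_for_epoch(base_lr: float, epoch: int,
                 decay_factor: float = 0.5, decay_every: int = 100) -> float:
    """Epoch-based learning-rate adjustment: scale the base rate by
    decay_factor once per decay_every epochs (step schedule assumed)."""
    return base_lr * (decay_factor ** (epoch // decay_every))


def should_checkpoint(epoch: int, save_every: int = 50) -> bool:
    """Configurable checkpoint frequency: save every save_every epochs."""
    return epoch % save_every == 0
```

In a training loop these would gate `optimizer.param_groups[...]["lr"]` updates and checkpoint writes at the top of each epoch.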
- Implemented `eikonal_loss` to compute gradient regularization for non-manifold points
- Added `offsurface_loss` to penalize near-zero predictions at points far from the surface
- Introduced `consistency_loss` to enforce prediction consistency between manifold and non-manifold points
- Added `position_loss` method to calculate manifold loss using mean absolute value
- Implemented `normals_loss` method to compute normal vector loss with gradient calculation
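The loss terms above follow the shape of IGR/NH-Rep-style implicit-surface losses. A minimal numpy sketch of the formulas is given below, assuming the network gradients are precomputed and passed in; the exact formulas and weights in the project's `loss.py` may differ, and the `consistency_loss` term is omitted since its definition is not stated:

```python
import numpy as np


def position_loss(mnfld_pred: np.ndarray) -> float:
    """Manifold loss: mean absolute predicted distance on surface points."""
    return float(np.abs(mnfld_pred).mean())


def eikonal_loss(nonmnfld_grad: np.ndarray) -> float:
    """Gradient regularization: the gradient norm should be 1 off-surface."""
    return float(((np.linalg.norm(nonmnfld_grad, axis=-1) - 1.0) ** 2).mean())


def offsurface_loss(nonmnfld_pred: np.ndarray, alpha: float = 100.0) -> float:
    """Penalize near-zero predictions at off-surface points (alpha assumed)."""
    return float(np.exp(-alpha * np.abs(nonmnfld_pred)).mean())


def normals_loss(mnfld_grad: np.ndarray, normals: np.ndarray) -> float:
    """Deviation between predicted gradients and ground-truth normals."""
    return float(np.linalg.norm(mnfld_grad - normals, axis=-1).mean())
```

In the real pipeline `mnfld_grad` and `nonmnfld_grad` come from autograd with respect to the input points.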
- Updated `train.py` to log input and output tensor shapes during training
- Modified network architecture in `train.py` by increasing hidden layer dimensions from [64, 64, 64] to [256, 256, 256]
- Created `loss.py` with a `LossManager` class to handle loss calculation
- Integrated `LossManager` into the training pipeline in `train.py`
- Implemented a basic manifold loss computation using mean absolute value
- Added `data_loader.py` with `NHREP_Dataset` class for loading point cloud, feature mask, and CSG tree data
- Implemented `CustomDataLoader` for flexible data loading with configurable parameters
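A rough sketch of what the `NHREP_Dataset` interface might look like is shown below; the real class reads point-cloud, feature-mask, and CSG-tree files from disk, and the in-memory constructor and field layout here are assumptions:

```python
import numpy as np


class NHREP_Dataset:
    """Holds a point cloud, a per-point feature mask, and an optional
    CSG tree (file parsing omitted; formats are assumptions)."""

    def __init__(self, points, feature_mask, csg_tree=None):
        self.points = np.asarray(points, dtype=np.float64)
        self.feature_mask = np.asarray(feature_mask, dtype=np.int64)
        self.csg_tree = csg_tree

    def __len__(self) -> int:
        return len(self.points)

    def __getitem__(self, idx):
        return self.points[idx], self.feature_mask[idx]
```

With `__len__` and `__getitem__` in place, a configurable loader can batch and shuffle on top of this without knowing the file formats.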
- Refactored `train.py` to create a structured training pipeline for NHRepNet
- Added support for feature sampling, device selection, and TensorBoard logging
- Introduced modular training methods with error handling and logging
- Changed the default `exc_info` parameter of the logger's error method to `False`, reducing unnecessary stack-trace output
- Changed default file log level from DEBUG to INFO in LogConfig
- Added comprehensive inline comments explaining each step of the training process in the `run_nhrepnet_training` method
- Improved code structure by adding descriptive comments for variable initializations and key computational steps
- Enhanced code readability by breaking down complex operations with clear explanatory comments
- Maintained existing functionality while providing better code documentation
- Replaced print statements with `logger.info()` in the `ReconstructionRunner` class
- Added logging for input and output tensor shapes in NHRepNet forward method
- Improved logging consistency and added docstring for network forward method
- Introduced a new command-line argument `--csv_name` to allow custom CSV filename for evaluation results
- Updated `compute_all()` function to use the dynamically specified CSV filename
- Improved flexibility of evaluation script by enabling users to specify output file name
- Maintained existing logging and error handling mechanisms
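The `--csv_name` argument described above can be sketched with `argparse`; the default filename and the rest of the parser are assumptions:

```python
import argparse


def build_eval_parser() -> argparse.ArgumentParser:
    """Evaluation CLI sketch: the output CSV filename is user-specified
    via --csv_name (other arguments omitted; default name assumed)."""
    parser = argparse.ArgumentParser(description="evaluation")
    parser.add_argument("--csv_name", type=str, default="results.csv",
                        help="filename for the evaluation results CSV")
    return parser
```

`compute_all()` would then receive `args.csv_name` instead of a hardcoded filename.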
- Added detailed comments to clarify the steps involved in the normalization of mesh vertices.
- Implemented calculations for the bounding box size and center point of the model.
- Updated the vertex normalization logic to center and scale vertices to the range of [-0.9, 0.9].
- Improved code readability and maintainability by providing clear explanations for each processing step.
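The normalization described above (center at the bounding-box midpoint, scale into [-0.9, 0.9]) can be sketched as follows; uniform scaling by the largest bounding-box extent is an assumption consistent with the stated target range:

```python
import numpy as np


def normalize_vertices(vertices) -> np.ndarray:
    """Center mesh vertices at the bounding-box center and uniformly
    scale them so every coordinate lies in [-0.9, 0.9]."""
    v = np.asarray(vertices, dtype=np.float64)
    bb_min, bb_max = v.min(axis=0), v.max(axis=0)
    center = (bb_min + bb_max) / 2.0           # bounding-box center point
    scale = (bb_max - bb_min).max() / 2.0      # half the largest extent
    return (v - center) / scale * 0.9
```

Uniform (rather than per-axis) scaling preserves the mesh's aspect ratio while keeping it inside the target cube.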
- Introduced `FeatureSampleConfig` class to encapsulate configuration parameters for feature sampling from command line arguments.
- Implemented methods for parsing and validating input parameters, ensuring required fields are provided.
- Updated `Source.cpp` to utilize the new configuration class, streamlining command line argument handling and improving code readability.
- Enhanced error handling for missing or invalid parameters, promoting robustness in the feature sampling process.
- Expanded the research sections in idea.md to include new innovation points and expected outcomes for noise robustness and feature preservation.
- Introduced a Dockerfile for setting up the pre-processing environment with necessary packages.
- Added a .gitignore file to exclude training data and raw input directories from version control.
- Updated README.md with clearer instructions and code formatting for better usability.
- Introduced a new function `load_and_process_single_model` to encapsulate the logic for evaluating a single model, enhancing code readability and maintainability.
- Updated `compute_all` to utilize the new function, streamlining the overall evaluation workflow.
- Improved error handling with logging for missing files and exceptions during processing.
- Enhanced caching mechanism for computed results to avoid redundant calculations.
- Added detailed comments and documentation for better understanding of the evaluation process.
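The single-model evaluation with caching and missing-file handling can be sketched as below; the function signature, JSON cache format, and `compute_fn` callback are assumptions:

```python
import json
import os


def load_and_process_single_model(model_id: str, gt_path: str,
                                  pred_path: str, cache_dir: str,
                                  compute_fn):
    """Evaluate one model, caching the result as JSON so repeated runs
    skip recomputation; raises on missing input files."""
    os.makedirs(cache_dir, exist_ok=True)
    cache_file = os.path.join(cache_dir, f"{model_id}.json")
    if os.path.exists(cache_file):
        # Cached result: avoid redundant computation
        with open(cache_file) as f:
            return json.load(f)
    if not (os.path.exists(gt_path) and os.path.exists(pred_path)):
        raise FileNotFoundError(f"missing files for model {model_id}")
    result = compute_fn(gt_path, pred_path)
    with open(cache_file, "w") as f:
        json.dump(result, f)
    return result
```

`compute_all` would then just loop over model ids, calling this per model and logging any `FileNotFoundError`.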
- Consolidated and updated entries to exclude unnecessary files and directories, including build artifacts, logs, and environment files.
- Added specific patterns for temporary editor files and OS-specific files to reduce clutter in the repository.
- Enhanced clarity by grouping related exclusions, ensuring a cleaner and more maintainable .gitignore structure.
- Added the environment variable `TZ=Asia/Shanghai` to the Docker run command so container timestamps match the target timezone, improving usability for users in that timezone.
- Changed the permissions of build.sh to make it executable.
- Simplified the cmake command in build.sh by removing hardcoded paths for CUDNN.
- Updated CMakeLists.txt in console_pytorch and evaluation directories to reflect a new CMAKE_PREFIX_PATH for libtorch, ensuring compatibility with the current workspace structure.
- Added `libtorch/` to prevent tracking of the bundled library files.
- Included patterns for `*.pth` and `*.zip` to exclude model and archive files.
- Ensured proper formatting and consistency in the .gitignore file.
- Expanded the .gitignore file to include CMake-related files and directories, such as CMakeCache.txt, CMakeFiles/, and build/ to prevent tracking of build artifacts.
- Added exclusions for log files and IDE-specific files, including .vscode/ and .idea/, to streamline the repository and avoid clutter from temporary files.
- Retained existing exclusions for CSV files while ensuring proper formatting in the .gitignore.
- Added project root directory setup to ensure consistent file paths.
- Integrated a custom logger for enhanced logging capabilities.
- Updated argument parsing to use absolute paths for ground truth and prediction data.
- Improved documentation for distance functions and added a new function to compute feature distances and angle differences.
- Refactored file reading to use context management for better resource handling.
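The angle-difference part of the new evaluation function can be sketched as follows; the function name and degree units are assumptions:

```python
import numpy as np


def angle_difference_deg(normals_a: np.ndarray,
                         normals_b: np.ndarray) -> np.ndarray:
    """Per-pair angle in degrees between corresponding normals."""
    a = normals_a / np.linalg.norm(normals_a, axis=-1, keepdims=True)
    b = normals_b / np.linalg.norm(normals_b, axis=-1, keepdims=True)
    # Clip guards against arccos domain errors from rounding
    cos = np.clip((a * b).sum(axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))
```

The feature-distance part would pair this with point-to-point distances restricted to the feature-masked subset.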
- Added `exps/` and `summary/` directories to the .gitignore file to prevent tracking of experimental and summary data.
- Retained the existing exclusion for `data/` while removing the specific exclusion for `data/*` to allow for potential subdirectory tracking.
- Split the initialization process into multiple private methods for better readability and maintainability.
- Added detailed logging for each step of the initialization process, including error handling for missing parameters and file loading issues.
- Enhanced configuration and directory setup with clearer error messages and structured logging.
- Improved data loading methods to handle both single and list-based data inputs more robustly.
- Introduced methods for setting up the CSG tree and computing local sigma values, with appropriate logging for each operation.
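The local-sigma computation mentioned above is typically a per-point distance to the k-th nearest neighbor, used to scale sampling noise; the sketch below assumes that heuristic and a brute-force neighbor search, neither of which is confirmed by the changelog:

```python
import numpy as np


def compute_local_sigma(points: np.ndarray, k: int = 50) -> np.ndarray:
    """Per-point sigma: distance to the k-th nearest neighbor
    (brute-force all-pairs distances; fine for small clouds)."""
    pts = np.asarray(points, dtype=np.float64)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    d.sort(axis=1)
    k = min(k, len(pts) - 1)
    return d[:, k]  # column 0 is each point's zero self-distance
```

For large point clouds the real implementation would use a KD-tree (e.g. `scipy.spatial.cKDTree`) instead of the O(n^2) distance matrix.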
- Introduced a `Logger` class that implements a singleton pattern for logging.
- Added a `ColoredFormatter` to provide colored log output based on log levels.
- Implemented methods for logging at different levels (debug, info, warning, error, exception).
- Included functionality to capture caller information and log it alongside messages.
- Created a `LogConfig` dataclass for easy configuration of logging parameters.
- Set up a global logger instance with default configuration.
- Added a `timeit` decorator for measuring function execution time.
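The `timeit` decorator described above can be sketched as below; printing is used here in place of the project's logger call, which is why this version is only an approximation:

```python
import functools
import time


def timeit(func):
    """Decorator measuring a function's wall-clock execution time;
    the real version would report through the global Logger instance."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed = time.perf_counter() - start
        print(f"{func.__name__} took {elapsed:.4f}s")
        return result
    return wrapper
```

`functools.wraps` preserves the wrapped function's name and docstring, which keeps log messages and debugging output readable.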