
πŸ‘‹ Fed-HeLLo: Efficient Federated Foundation Model Fine-Tuning with Heterogeneous LoRA Allocation

Official Code for the paper accepted to IEEE Transactions on Neural Networks and Learning Systems (TNNLS)


πŸ§ͺ Getting Started

βš™οΈ Environment Setup

Ensure you have Conda installed. Then, create and activate the environment using the provided file:

conda env create --name env.fl --file=environment.yml
conda activate env.fl

πŸš€ Running the Code

First, set up the HuggingFace Accelerate configuration:

cp accelerate_default_config.yaml ~/.cache/huggingface/accelerate/default_config.yaml
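The repository ships its own accelerate_default_config.yaml, which is not reproduced here. For orientation, a minimal single-process Accelerate config uses keys like the following (the values below are illustrative defaults, not the repo's actual settings):

```yaml
# Illustrative Accelerate config; the repo's accelerate_default_config.yaml
# contains the actual settings used for the experiments.
compute_environment: LOCAL_MACHINE
distributed_type: 'NO'         # single process; use MULTI_GPU for several GPUs
mixed_precision: 'no'
num_machines: 1
machine_rank: 0
num_processes: 1
main_training_function: main
use_cpu: false
```

Note that the cp command above overwrites any existing ~/.cache/huggingface/accelerate/default_config.yaml, so back it up first if you use Accelerate for other projects.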

Next, launch the training script for the CIFAR-100 dataset:

bash run-cifar100.sh
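Federated fine-tuning runs can take a while. One convenient pattern (our suggestion, not part of the repo) is to background the run and redirect its output under log/, which the project already uses for logs and results; the timestamped filename below is our own choice:

```shell
# Background the training run and capture stdout/stderr under log/.
# "run-cifar100.sh" is the launch script from the step above.
mkdir -p log
nohup bash run-cifar100.sh > "log/cifar100-$(date +%Y%m%d-%H%M%S).out" 2>&1 &
echo "started background run with PID $!"
```

You can then follow progress with `tail -f` on the newest file in log/.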

πŸ“ Project Structure

.
β”œβ”€β”€ algorithms/
β”‚   β”œβ”€β”€ engine/   # Federated learning coordination logic
β”‚   └── solver/   # Local training procedures
β”œβ”€β”€ config/         # YAML configuration files
β”œβ”€β”€ data/           # Dataset cache directory
β”œβ”€β”€ log/            # Output logs and saved results
β”œβ”€β”€ model/          # Model definitions
β”œβ”€β”€ utils/          # Utility functions
β”œβ”€β”€ main.py         # Entry point for training
└── test.py         # Evaluation and testing routines

πŸ“„ Citation

If you find this work useful for your research, please cite our paper:

@article{zhang2025fed,
  title={Fed-{HeLLo}: Efficient Federated Foundation Model Fine-Tuning with Heterogeneous {LoRA} Allocation},
  author={Zhang, Zikai and Liu, Ping and Xu, Jiahao and Hu, Rui},
  journal={IEEE Transactions on Neural Networks and Learning Systems},
  year={2025},
  publisher={IEEE}
}

πŸ“¬ Contact

For any questions or suggestions, please feel free to open an issue on this repository or contact the authors directly.
