Official code for the paper "Fed-HeLLo: Efficient Federated Foundation Model Fine-Tuning with Heterogeneous LoRA Allocation", accepted to IEEE Transactions on Neural Networks and Learning Systems (TNNLS).
Ensure you have Conda installed. Then, create and activate the environment using the provided file:

```bash
conda env create --name env.fl --file=environment.yml
conda activate env.fl
```

First, set up the HuggingFace Accelerate configuration:

```bash
cp accelerate_default_config.yaml ~/.cache/huggingface/accelerate/default_config.yaml
```

Next, launch the training script for the CIFAR-100 dataset:

```bash
bash run-cifar100.sh
```
```
├── algorithms/
│   ├── engine/    # Federated learning coordination logic
│   └── solver/    # Local training procedures
├── config/        # YAML configuration files
├── data/          # Dataset cache directory
├── log/           # Output logs and saved results
├── model/         # Model definitions
├── utils/         # Utility functions
├── main.py        # Entry point for training
└── test.py        # Evaluation and testing routines
```
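The heterogeneous LoRA allocation named in the paper's title assigns different sets of trainable LoRA layers to clients with different resource budgets. The following toy sketch shows one possible capacity-proportional rule; the function name and the allocation rule are illustrative assumptions, not the repository's actual algorithm:

```python
# Toy sketch of heterogeneous LoRA-layer allocation across federated
# clients. Illustrative only: the rule below is an assumption for
# exposition, not the paper's method.

def allocate_lora_layers(num_layers, capacities):
    """Assign each client a contiguous block of trainable LoRA layers,
    sized in proportion to its relative compute capacity. Weaker clients
    fine-tune fewer (and only the top-most) backbone layers."""
    max_cap = max(capacities)
    allocations = []
    for cap in capacities:
        k = max(1, round(num_layers * cap / max_cap))  # layers this client trains
        allocations.append(list(range(num_layers - k, num_layers)))
    return allocations

# Three clients with descending capacity sharing a 12-layer backbone:
# client 0 trains all 12 layers, client 1 the top 6, client 2 the top 3.
print(allocate_lora_layers(12, [1.0, 0.5, 0.25]))
```

In a scheme like this, the server aggregates only the LoRA parameters each client actually trained, which is what makes per-client allocation compatible with standard federated averaging.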
If you find this work useful for your research, please cite our paper:
```bibtex
@article{zhang2025fed,
  title={{Fed-HeLLo}: Efficient federated foundation model fine-tuning with heterogeneous {LoRA} allocation},
  author={Zhang, Zikai and Liu, Ping and Xu, Jiahao and Hu, Rui},
  journal={IEEE Transactions on Neural Networks and Learning Systems},
  year={2025},
  publisher={IEEE}
}
```
For any questions or suggestions, please feel free to open an issue on this repository or contact the authors directly.