PAPERS

BlazeFL: Fast and Deterministic Federated Learning Simulation
FedVision at CVPR 2026 (CVPRW), 2026
We propose BlazeFL, a lightweight framework for fast and reproducible single-node federated learning simulation. By leveraging free-threaded shared-memory execution and client-isolated random number generators, it achieves up to 3.1× speedup while ensuring bitwise-identical results across repeated runs.
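The determinism claim rests on giving each simulated client its own random stream, so results do not depend on which thread runs first. A minimal sketch of that idea, assuming NumPy's `SeedSequence` spawning (the function names here are illustrative, not BlazeFL's actual API):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def make_client_rng(global_seed: int, client_id: int) -> np.random.Generator:
    # Derive an independent, reproducible stream per client from
    # (global_seed, client_id), so draws are unaffected by scheduling order.
    return np.random.default_rng(np.random.SeedSequence([global_seed, client_id]))

def local_update(client_id: int, global_seed: int = 42) -> np.ndarray:
    # Stand-in for a local training step: draw "gradient noise"
    # from the client's private stream.
    rng = make_client_rng(global_seed, client_id)
    return rng.normal(size=3)

# Run eight clients in a shared-memory thread pool, twice.
with ThreadPoolExecutor(max_workers=4) as pool:
    run1 = list(pool.map(local_update, range(8)))
with ThreadPoolExecutor(max_workers=4) as pool:
    run2 = list(pool.map(local_update, range(8)))

# Bitwise-identical across repeated runs, regardless of thread interleaving.
assert all(np.array_equal(a, b) for a, b in zip(run1, run2))
```

Because each client's generator is derived from its ID rather than from a shared global state, the simulation stays reproducible even under free-threaded parallel execution.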

Soft-Label Caching and Sharpening for Communication-Efficient Federated Distillation
IEEE Transactions on Mobile Computing, 2026
We propose SCARLET, a communication-efficient, distillation-based federated learning framework. By caching soft labels to avoid redundant transmission and applying an enhanced Entropy Reduction Aggregation, it reduces communication cost by up to 50% while maintaining competitive accuracy.
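The core intuition can be sketched in a few lines: reuse a cached soft label when the new one has barely changed, and sharpen whatever is served with a temperature. The cache-hit rule, threshold, and temperature below are illustrative assumptions for exposition, not SCARLET's actual ERA mechanism:

```python
import numpy as np

TEMPERATURE = 0.5   # T < 1 sharpens the distribution (hypothetical value)
THRESHOLD = 0.05    # max-abs change below which the cached label is reused

def sharpen(p: np.ndarray, T: float = TEMPERATURE) -> np.ndarray:
    # Temperature sharpening: raise probabilities to 1/T and renormalize.
    q = p ** (1.0 / T)
    return q / q.sum()

class SoftLabelCache:
    """Illustrative server-side cache: a soft label is transmitted only
    when it differs enough from the cached copy (names are hypothetical)."""

    def __init__(self):
        self.cache = {}
        self.transmissions = 0

    def submit(self, sample_id: int, soft_label: np.ndarray) -> np.ndarray:
        cached = self.cache.get(sample_id)
        if cached is not None and np.max(np.abs(cached - soft_label)) < THRESHOLD:
            return sharpen(cached)          # cache hit: no transmission counted
        self.cache[sample_id] = soft_label  # cache miss: counts as one upload
        self.transmissions += 1
        return sharpen(soft_label)

cache = SoftLabelCache()
p = np.array([0.2, 0.3, 0.5])
cache.submit(0, p)          # first round: transmitted and cached
cache.submit(0, p + 1e-3)   # nearly unchanged: served from cache
assert cache.transmissions == 1
```

Under this toy rule, every round whose soft labels drift less than the threshold costs zero uploads, which is the source of the communication savings; the sharpening step keeps the distilled targets confident despite reuse.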
