Kitsuya Azuma

TECH BLOG


PAPERS

BlazeFL: Fast and Deterministic Federated Learning Simulation

FedVision at CVPR 2026 (CVPRW), 2026

We propose BlazeFL, a lightweight framework for fast and reproducible single-node federated learning simulation. By leveraging free-threaded shared-memory execution and client-isolated random number generators, it achieves up to 3.1× speedup while ensuring bitwise-identical results across repeated runs.
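The client-isolated RNG idea can be illustrated with a short sketch. This is not the BlazeFL API; the function names (`client_rng`, `simulate`) and the stand-in "local update" are hypothetical, but the core technique matches the abstract: derive one generator per client from a global seed, so each client's random stream is independent of thread scheduling and repeated runs are bitwise identical.

```python
import hashlib
import random
from concurrent.futures import ThreadPoolExecutor

def client_rng(global_seed: int, client_id: int) -> random.Random:
    """Derive a per-client generator from the global seed, so each
    client's random stream does not depend on scheduling order."""
    digest = hashlib.sha256(f"{global_seed}:{client_id}".encode()).digest()
    return random.Random(int.from_bytes(digest[:8], "big"))

def local_update(client_id: int, rng: random.Random) -> float:
    # Stand-in for local training: draws come only from the client's own RNG.
    return sum(rng.random() for _ in range(1000))

def simulate(global_seed: int, num_clients: int = 8) -> list[float]:
    rngs = {cid: client_rng(global_seed, cid) for cid in range(num_clients)}
    # Clients run concurrently in shared memory; because no client touches
    # another client's generator, the interleaving cannot change the result.
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = {cid: pool.submit(local_update, cid, rngs[cid])
                   for cid in range(num_clients)}
    return [futures[cid].result() for cid in range(num_clients)]

# Repeated runs are identical regardless of how threads interleave.
assert simulate(42) == simulate(42)
```

On a free-threaded (no-GIL) Python 3.13+ build the same pattern applies unchanged: isolation of per-client state, not lock ordering, is what guarantees determinism.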

Soft-Label Caching and Sharpening for Communication-Efficient Federated Distillation


IEEE Transactions on Mobile Computing, 2026

We propose SCARLET, a communication-efficient, distillation-based Federated Learning framework. By caching soft labels to avoid redundant transmission and applying an enhanced Entropy Reduction Aggregation, it reduces communication costs by up to 50% while maintaining competitive accuracy.
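The caching-and-sharpening idea can be sketched as follows. This is a hypothetical illustration, not SCARLET's actual implementation: the class name, the KL-divergence change test, the tolerance parameter, and the temperature-based `sharpen` helper are all assumptions chosen to show how caching soft labels can skip redundant uploads.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two probability vectors, with a small epsilon for stability."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def sharpen(probs, T=0.5):
    """Temperature sharpening: T < 1 concentrates mass on the top class."""
    powered = [p ** (1.0 / T) for p in probs]
    total = sum(powered)
    return [p / total for p in powered]

class SoftLabelCache:
    """Client-side cache: re-send a sample's soft label only when it has
    drifted beyond a tolerance from the last transmitted copy."""

    def __init__(self, tol: float = 0.01):
        self.tol = tol
        self.cache = {}  # sample_id -> last transmitted soft label

    def filter_upload(self, soft_labels: dict) -> dict:
        upload = {}
        for sid, probs in soft_labels.items():
            cached = self.cache.get(sid)
            if cached is None or kl_divergence(probs, cached) > self.tol:
                upload[sid] = probs
                self.cache[sid] = probs
        return upload

# Round 1 uploads everything; round 2 skips labels that barely changed.
cache = SoftLabelCache(tol=0.01)
round1 = cache.filter_upload({0: [0.9, 0.1], 1: [0.2, 0.8]})   # both sent
round2 = cache.filter_upload({0: [0.9, 0.1], 1: [0.21, 0.79]}) # neither sent
```

Any drift measure and threshold could stand in for the KL test here; the communication saving comes from the cache itself, which lets the server reuse its stored copy whenever a client stays silent about a sample.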

TALKS

Beyond Multiprocessing: A Real-World ML Workload Speedup with Python 3.13+ Free-Threading

PyCon JP 2025, Talk, 2025

Notebook as a Service in Practice with a Lab Server and Kubeflow

CloudNative Days Summer 2025, LT, 2025

Rootless Containers for Everyone: Safe Container Management Even on Lab Servers

Students and Working Adults Lightning Talk Meetup #2, LT, 2024

VIDEOS

Why Engineers Should Absolutely Attend Tech Conferences.
TECH WORLD, 2026

Day 1: Beyond Multiprocessing: A Real-World ML Workload Speedup with Python 3.13+ Free-Threading
PyCon JP, 2025
