Keynote Talks
CANDAR Keynotes
Keynote1
- Chair: Victor Parque (Hiroshima University)
- Speaker: Yusheng Ji (NII)
- Title: Network Resource Management with Distributed Intelligence
- Abstract: To meet the heterogeneous service demands of future applications, ranging from holographic communications and massive digital twins to the large-scale Internet of Things, next-generation mobile networks (6G) will require unprecedented management agility, stringent quality-of-service guarantees, and real-time adaptability. To cope with the increasing control complexity and the physical constraints of wide-area deployments, 6G networks will evolve to be AI-native by design, integrating machine-learning (ML) capabilities throughout the layers of the architecture. In this talk, we focus on how distributed intelligence can deliver efficient network-resource management at scale. We highlight use cases that harness distributed intelligence to unlock the potential of edge devices, demonstrating tangible gains in throughput and energy efficiency along with reduced control overhead across large-scale systems.
Keynote2
- Chair: Michihiro Koibuchi (National Institute of Informatics)
- Speaker: Yoshifumi Ujibashi (Fujitsu Limited)
- Title: AI computing broker: An AI Processing Optimization Middleware
- Abstract: The explosive growth in demand for generative AI has created unprecedented requirements for the GPUs on which AI processing is executed. However, the consequent surge in data-center power consumption and rising hardware costs pose an urgent challenge to sustainability and operational efficiency. To address this global societal issue, we have developed “AI Computing Broker,” a middleware technology designed to automatically optimize computations for AI workloads running on GPU servers. Its core component, the “Adaptive GPU Allocator,” dynamically allocates GPU resources in real time to AI processes with high GPU utilization. Unlike conventional job-level allocation, our technology assigns resources dynamically at the processing level, maximizing the utilization of GPU computational resources and significantly enhancing GPU operational efficiency. This talk introduces the technical details of AI Computing Broker, its practical implications, and its use cases for AI infrastructure and AI services.
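The contrast the abstract draws between job-level and processing-level GPU allocation can be sketched with a toy tick-based simulation. This is not Fujitsu's implementation; the job shapes, scheduler, and `simulate` function are invented purely to illustrate why handing out the GPU per burst rather than per job raises utilization:

```python
def simulate(jobs, shared):
    """Toy tick simulation: each job is a list of 'gpu'/'cpu' phases.

    shared=False: job-level allocation -- the GPU is reserved for each
    job's whole run, so jobs execute back to back.
    shared=True: processing-level allocation -- the GPU is granted only
    for GPU bursts, so jobs interleave on one GPU.
    Returns (total ticks, ticks the GPU was busy).
    """
    if not shared:
        ticks = sum(len(j) for j in jobs)               # jobs run sequentially
        busy = sum(p == "gpu" for j in jobs for p in j)
        return ticks, busy
    pos = [0] * len(jobs)                               # next phase per job
    ticks = busy = 0
    while any(pos[i] < len(jobs[i]) for i in range(len(jobs))):
        ticks += 1
        granted = False
        for i, job in enumerate(jobs):
            if pos[i] >= len(job):
                continue                                # job finished
            if job[pos[i]] == "cpu":
                pos[i] += 1                             # CPU phases never wait
            elif not granted:
                granted = True                          # one GPU burst per tick
                pos[i] += 1
        busy += granted
    return ticks, busy

jobs = [["gpu", "cpu", "gpu", "cpu"]] * 2
t_job, b_job = simulate(jobs, shared=False)    # 8 ticks, GPU busy 4 (50%)
t_proc, b_proc = simulate(jobs, shared=True)   # 5 ticks, GPU busy 4 (80%)
```

Under job-level allocation the GPU sits idle during each job's CPU phases; the processing-level policy fills those gaps with the other job's bursts, shortening the makespan and raising GPU utilization, which is the effect the middleware targets.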
Keynote3
- Chair: Jacir L. Bordim (University of Brasilia)
- Speaker: Koji Nakano (Hiroshima University)
- Title: QUBO++: A Native-HUBO, High-Performance C++ Library with a Multi-GPU Solver
- Abstract: QUBO++ is a high-performance C++ library for higher-order unconstrained binary optimization (HUBO) that integrates symbolic preprocessing of HUBO expressions with three bundled solvers, including a multi-GPU acceleration engine. Most toolchains convert HUBO to QUBO via auxiliary variables, causing model blow-up, excessive memory use, and runtime overhead; QUBO++ instead handles HUBO natively, preserving sparsity and structure to achieve faster model construction and more efficient search on modern GPUs. Its C++-embedded domain-specific language (DSL) allows users to describe HUBO models naturally as algebraic expressions, while the compiled implementation minimizes orchestration overhead and achieves near-hardware throughput on large instances. This keynote introduces the framework’s design and solver architecture and presents performance comparisons with representative Python-based tools. We conclude with a brief case study: deploying QUBO++ for production scheduling in a heat-treatment factory, where the system automated a task that previously required about two hours of manual operator work, generating an optimal schedule in about one minute.
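The HUBO-to-QUBO reduction the abstract says QUBO++ avoids can be seen on a tiny example. The sketch below is generic and does not use the QUBO++ API: a single cubic term is quadratized by introducing an auxiliary variable `w` for the product `x*y`, enforced with the standard penalty `xy - 2xw - 2yw + 3w` (zero exactly when `w == x*y`, at least 1 otherwise). The extra variable per higher-order term is the model blow-up the abstract refers to:

```python
from itertools import product

# Native HUBO energy: one cubic term E(x, y, z) = -x*y*z over binary variables.
def hubo_energy(x, y, z):
    return -x * y * z

# Quadratized (QUBO) energy: substitute w for x*y and add a scaled penalty
# that vanishes only on assignments where w == x*y.
def qubo_energy(x, y, z, w, M=10):
    penalty = x * y - 2 * x * w - 2 * y * w + 3 * w
    return -w * z + M * penalty

# The QUBO minimum over 4 variables matches the HUBO minimum over 3 --
# correctness is preserved, but the search space has doubled.
hubo_min = min(hubo_energy(*b) for b in product((0, 1), repeat=3))
qubo_min = min(qubo_energy(*b) for b in product((0, 1), repeat=4))
assert hubo_min == qubo_min == -1
```

With one auxiliary variable per pairwise product in each higher-order term, dense high-degree models grow quickly under this reduction, which is why handling HUBO natively (as the abstract claims QUBO++ does) avoids both the memory growth and the enlarged search space.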
