RL Minor Embedding for Oscillator Ising Machines


December 8, 2025 · 13 slides

Slide 1 - Master's Thesis Presentation

This is the title slide for a Master's Thesis Presentation. The subtitle states: "Developing a Minor Embedding Algorithm for Oscillator-Based Ising Machines Using Reinforcement Learning by Shishir Siwakoti."

Master's Thesis Presentation

Developing a Minor Embedding Algorithm for Oscillator-Based Ising Machines Using Reinforcement Learning by Shishir Siwakoti

Source: Developing a Minor Embedding Algorithm for Oscillator-Based Ising Machines Using Reinforcement Learning

Speaker Notes
Include university logo and date

Slide 2 - Presentation Agenda

The slide outlines a five-part presentation agenda on oscillator-based Ising machines and reinforcement learning. It covers Motivation & Objectives, Hardware & RL Formulation, Policy & Training, Results & Comparisons, and Conclusions & Future Work.

Presentation Agenda

  1. Motivation & Objectives: title, research motivation, objectives, and problem definition.

  2. Hardware & RL Formulation: oscillator-based Ising machine model and RL problem setup.

  3. Policy & Training: policy architecture, training procedures, and evaluation methods.

  4. Results & Comparisons: experimental results and comparisons with prior approaches.

  5. Conclusions & Future Work: summary of findings and directions for future research.

Source: Developing a Minor Embedding Algorithm for Oscillator-Based Ising Machines Using Reinforcement Learning by Shishir Siwakoti


Slide 3 - Motivation

Oscillator-based Ising machines efficiently solve combinatorial optimization problems, but minor embedding challenges limit their scalability. Traditional embedding methods are suboptimal, while RL enables adaptive learning for superior embeddings.

Motivation

  • Oscillator-based Ising machines solve combinatorial optimization efficiently.
  • Minor embedding challenges limit scalability.
  • Traditional embedding methods are suboptimal.
  • RL enables adaptive learning for superior embeddings.

Source: Shishir Siwakoti, Master's Thesis

Speaker Notes
Emphasize efficiency gap and RL's potential.

Slide 4 - Research Objectives

The research objectives center on developing a reinforcement learning-based minor embedding algorithm optimized for oscillator-based hardware. The aims are to surpass baselines in embedding quality while minimizing solve time.

Research Objectives

  • Develop an RL-based minor embedding algorithm.
  • Optimize for oscillator-based hardware.
  • Surpass baselines in embedding quality.
  • Minimize solve time versus baselines.

Source: Developing a Minor Embedding Algorithm for Oscillator-Based Ising Machines Using Reinforcement Learning by Shishir Siwakoti


Slide 5 - Developing a Minor Embedding Algorithm for Oscillator-Based Ising Machines Using Reinforcement Learning

This slide serves as the section header for Section 04: Problem Definition. It highlights the challenge of mapping problem graphs to oscillator-based Ising machine hardware while preserving structure amid oscillator dynamics and chain issues.

Developing a Minor Embedding Algorithm for Oscillator-Based Ising Machines Using Reinforcement Learning

04

Problem Definition

Mapping problem graphs onto hardware graphs while preserving structure, despite oscillator dynamics and chain-length challenges (see the sketch below).
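To make the definition concrete, the sketch below (a hypothetical helper in Python with networkx, not code from the thesis) treats an embedding as a map from each logical node to a chain of hardware nodes, and checks the three standard validity conditions: connected chains, disjoint chains, and a hardware edge for every logical edge.

    import networkx as nx

    def is_valid_minor_embedding(problem: nx.Graph, hardware: nx.Graph,
                                 embedding: dict) -> bool:
        # embedding maps each logical node to a chain (iterable of hardware nodes)
        chains = {v: set(c) for v, c in embedding.items()}
        if set(chains) != set(problem.nodes):
            return False  # every logical node must be embedded

        # 1. Each chain must be non-empty and induce a connected hardware subgraph.
        for chain in chains.values():
            if not chain or not nx.is_connected(hardware.subgraph(chain)):
                return False

        # 2. Chains must be pairwise disjoint (each oscillator used at most once).
        used = [q for chain in chains.values() for q in chain]
        if len(used) != len(set(used)):
            return False

        # 3. Every logical edge needs at least one hardware edge between chains.
        for u, v in problem.edges:
            if not any(hardware.has_edge(a, b)
                       for a in chains[u] for b in chains[v]):
                return False
        return True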

Source: Master's Thesis by Shishir Siwakoti

Speaker Notes
Define minor embedding: Map problem graph to hardware graph while preserving structure. Challenges in oscillator dynamics and chain lengths.

Slide 6 - Hardware Model

The slide describes a hardware model as a network of coupled oscillators with programmable couplings J_ij, where phases θ_i encode spin variables σ_i ≈ cos(θ_i). It targets optimization of the Ising Hamiltonian H = -∑_{i<j} J_ij σ_i σ_j - ∑_i h_i σ_i, featuring Ising spins σ_i, couplings J_ij, and local biases h_i.

Hardware Model

Oscillator Network
Hardware model: a network of coupled oscillators with programmable couplings J_ij between oscillators i and j. Oscillator phases θ_i represent spin variables, typically encoded as σ_i ≈ cos(θ_i).

Ising Hamiltonian
Optimization target: H = -∑_{i<j} J_ij σ_i σ_j - ∑_i h_i σ_i, where σ_i ∈ {-1, +1} are Ising spins, J_ij are couplings, and h_i are local biases.
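As a concrete illustration of the phase-to-spin encoding, here is a minimal sketch (not code from the thesis) that evaluates the Hamiltonian above from oscillator phases:

    import numpy as np

    def ising_energy_from_phases(theta, J, h):
        # Read spins from phases via sigma_i ~ cos(theta_i), binarized to {-1, +1}.
        s = np.sign(np.cos(theta))
        # H = -sum_{i<j} J_ij s_i s_j - sum_i h_i s_i (J symmetric, zero diagonal).
        return float(-0.5 * s @ J @ s - h @ s)

    # Two anti-ferromagnetically coupled oscillators settling pi out of phase:
    J = np.array([[0.0, -1.0], [-1.0, 0.0]])
    print(ising_energy_from_phases(np.array([0.0, np.pi]), J, np.zeros(2)))  # -1.0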

Source: 'Developing a Minor Embedding Algorithm for Oscillator-Based Ising Machines Using Reinforcement Learning' by Shishir Siwakoti

Speaker Notes
Highlight how oscillator phases θ_i encode spins via σ_i ≈ cos(θ_i), bridging hardware to Ising model.

Slide 7 - RL Formulation

This slide formulates minor embedding as an MDP: states are the problem graph plus a partial embedding, actions extend embedding chains, and rewards combine quality and validity metrics. The goal is to maximize long-term reward for complete embeddings.

RL Formulation

  • MDP State: Graph + partial embedding
  • Actions: Extend embedding chains
  • Rewards: Quality + validity metrics
  • Goal: Maximize long-term reward for complete embeddings (see the sketch after this list)
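A minimal Gym-style sketch of this MDP (the class name, reward weights, and completeness test are illustrative assumptions, not the thesis implementation):

    import networkx as nx

    class EmbeddingEnv:
        """Toy MDP: state = (problem graph, partial embedding);
        an action (logical_node, hw_node) extends that node's chain."""

        def __init__(self, problem: nx.Graph, hardware: nx.Graph):
            self.problem, self.hardware = problem, hardware
            self.embedding = {v: set() for v in problem.nodes}

        def step(self, logical_node, hw_node):
            # Action: grow one chain by a single hardware oscillator.
            self.embedding[logical_node].add(hw_node)
            done = all(self.embedding.values())  # placeholder completeness test
            # Quality term penalizes chain growth; validity bonus on completion.
            reward = -0.1 * len(self.embedding[logical_node]) + (10.0 if done else 0.0)
            return self.embedding, reward, done

A full implementation would replace the placeholder test with a validity check like the one sketched on the Problem Definition slide.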

Source: Developing a Minor Embedding Algorithm for Oscillator-Based Ising Machines Using Reinforcement Learning


Slide 8 - Policy Architecture

The "Policy Architecture" slide depicts a model with a GNN encoder that processes adjacency matrices. It includes an actor head for predicting chain extensions and a critic head for estimating state values.

Policy Architecture

[Diagram: GNN encoder feeding actor and critic heads]

  • GNN encoder processes adjacency matrices
  • Actor head predicts chain extensions
  • Critic head estimates state values (see the sketch after this list)
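A minimal PyTorch sketch of such an architecture (the layer sizes, the A + I aggregation, and the mean-pooling readout are assumptions, not the thesis design):

    import torch
    import torch.nn as nn

    class GNNActorCritic(nn.Module):
        """Message-passing encoder with actor/critic heads.
        x: node features (N, F); adj: dense adjacency matrix (N, N)."""

        def __init__(self, in_dim, hidden=64):
            super().__init__()
            self.enc1 = nn.Linear(in_dim, hidden)
            self.enc2 = nn.Linear(hidden, hidden)
            self.actor = nn.Linear(hidden, 1)   # per-node logit: extend chain here
            self.critic = nn.Linear(hidden, 1)  # state value from pooled embedding

        def forward(self, x, adj):
            # Two rounds of neighborhood aggregation with self-loops (A + I);
            # a standard GCN would also degree-normalize the adjacency.
            a_hat = adj + torch.eye(adj.size(0))
            h = torch.relu(a_hat @ self.enc1(x))
            h = torch.relu(a_hat @ self.enc2(h))
            return self.actor(h).squeeze(-1), self.critic(h.mean(dim=0))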

Source: Wikipedia: Graph neural network

Speaker Notes
Diagram of RL policy network: Graph Neural Network (GNN) encoder + Actor-Critic heads for actions/values. Inputs: adjacency matrices; Outputs: chain extensions.

Slide 9 - Training and Evaluation

The timeline details training phases: pre-training on small graphs in 2022 for basic minor embedding, and curriculum learning with increasing complexity in 2023 for scalability. Evaluation on QA benchmarks, including Ising instances up to 1000 spins, occurred in 2024.

Training and Evaluation

2022: Pre-training on Small Graphs. Initial RL policy training on small graph instances to learn basic minor embedding capabilities.

2023: Curriculum Learning Phase. Gradual increase in graph complexity during training for enhanced scalability and generalization.

2024: Evaluation on Benchmarks. Assessed performance on Ising instances up to 1000 spins using established QA benchmarks.
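A minimal sketch of the curriculum idea (the graph sizes, episode counts, and random-graph model are illustrative assumptions, not the thesis settings): training episodes draw progressively larger problem instances.

    import random
    import networkx as nx

    def curriculum_graphs(stages=((10, 1000), (50, 1000), (200, 1000))):
        # Each stage is (max_nodes, episodes); later stages draw larger graphs.
        for max_nodes, episodes in stages:
            for _ in range(episodes):
                n = random.randint(4, max_nodes)
                yield nx.gnp_random_graph(n, p=min(1.0, 3.0 / n))  # sparse instances

    # for g in curriculum_graphs():
    #     run_episode(policy, EmbeddingEnv(g, hardware))  # hypothetical trainer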

Source: Developing a Minor Embedding Algorithm for Oscillator-Based Ising Machines Using Reinforcement Learning by Shishir Siwakoti

Speaker Notes
Highlight progressive training from small graphs to large-scale evaluation on Ising benchmarks.

Slide 10 - Results

The Results slide showcases a 95%+ embedding success rate on benchmark instances. It also reports a 20% chain length reduction versus baseline methods and a 3x solve time speedup in hardware simulations.

Results

  • 95%+ embedding success rate on benchmark instances
  • 20% chain length reduction vs. baseline methods
  • 3x solve time speedup in hardware simulations

Source: Developing a Minor Embedding Algorithm for Oscillator-Based Ising Machines Using Reinforcement Learning

Speaker Notes
Highlight embedding success, chain reduction, and speedup. Graphs show improvements vs. iterations.

Slide 11 - Comparisons

The slide compares the RL policy to a greedy baseline and to QUBO minor embedding methods. The RL policy achieves superior embedding density and resource efficiency versus the greedy baseline, and outperforms QUBO embedding in training convergence speed and embedding fidelity.

Comparisons

vs. Baseline (greedy)
The RL policy achieves superior embedding density, packing more logical qubits into the available hardware qubits compared to the simple greedy baseline approach. This leads to more efficient use of oscillator resources.

vs. QUBO minor embedding
Exhibits faster convergence during reinforcement learning training and delivers higher-fidelity embeddings, outperforming traditional QUBO-based minor embedding methods in both speed and solution quality.

Source: Developing a Minor Embedding Algorithm for Oscillator-Based Ising Machines Using Reinforcement Learning by Shishir Siwakoti

Speaker Notes
Emphasize how the RL method outperforms baselines in density, speed, and fidelity for minor embedding on oscillator-based hardware.

Slide 12 - Conclusions

RL enables efficient minor embedding for oscillator Ising machines, achieving state-of-the-art performance, with the approach validated on realistic hardware models. The slide concludes with "Thank you for your attention!"

Conclusions

  • RL enables efficient minor embedding for oscillator Ising machines
  • Achieves state-of-the-art performance
  • Validates the approach on realistic hardware models

Thank you for your attention!

Source: Developing a Minor Embedding Algorithm for Oscillator-Based Ising Machines Using Reinforcement Learning by Shishir Siwakoti

Speaker Notes
Key takeaways: RL achieves efficient minor embedding, state-of-the-art results, validated on realistic hardware. Closing message: 'RL paves the way for advanced Ising solvers.' Call-to-action: 'Reach out for collaborations or questions.'

Slide 13 - Future Work

The "Future Work" slide outlines plans to deploy on real hardware and implement multi-graph batching. It also includes developing hybrid classical-quantum embeddings and scaling to larger problems.

Future Work

  • Deploy on real hardware.
  • Implement multi-graph batching.
  • Develop hybrid classical-quantum embeddings.
  • Scale to larger problems.

Source: Developing a Minor Embedding Algorithm for Oscillator-Based Ising Machines Using Reinforcement Learning by Shishir Siwakoti

