
AI researcher exploring the emergence of intelligence through the lens of computational neuroscience. Fascinated by how biological principles can guide the development of more efficient, adaptive artificial systems.
A 四字熟語 (yojijukugo): a four-character Japanese idiom rooted in Kegon Buddhism, meaning the complete and unimpeded interpenetration of all phenomena.
This philosophy resonates deeply with my research: understanding intelligence as an emergent property of interconnected systems, where insights from neuroscience, mathematics, and computation flow freely to inform one another.
Developed continuous-time recurrent neural networks that reproduce key patterns observed in biological motor control. By applying simple architectural and training constraints inspired by neuroscience, the system naturally develops brain-like dynamics.
The resulting controller exhibits modularity, composing complex movements from simpler primitives. It maintains orthogonal subspaces for preparation and execution phases, reducing interference and enabling rapid adaptation to changes in dynamics. Feedback is used primarily for corrections rather than continuous control, mirroring biological motor systems.
This work demonstrates how biologically grounded constraints can regularize learning, improve generalization beyond training distributions, and produce more interpretable, robust control systems.
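The core dynamics of such a controller can be sketched as an Euler-discretized continuous-time RNN. The time constant, step size, and random weight initialization below are illustrative assumptions, not the trained system or its actual constraints.

```python
import numpy as np

class CTRNN:
    """Euler-discretized continuous-time RNN: tau * dx/dt = -x + W @ tanh(x) + B @ u."""

    def __init__(self, n_units, n_inputs, tau=0.1, dt=0.01, seed=0):
        rng = np.random.default_rng(seed)
        # Scaled random recurrent weights; the real model adds structural constraints here
        self.W = rng.normal(0.0, 1.0 / np.sqrt(n_units), (n_units, n_units))
        self.B = rng.normal(0.0, 1.0, (n_units, n_inputs))
        self.tau, self.dt = tau, dt
        self.x = np.zeros(n_units)

    def step(self, u):
        # One Euler step of the leaky continuous-time dynamics
        dx = (-self.x + self.W @ np.tanh(self.x) + self.B @ u) / self.tau
        self.x = self.x + self.dt * dx
        return np.tanh(self.x)  # firing rates

rnn = CTRNN(n_units=64, n_inputs=2)
rates = [rnn.step(np.array([1.0, 0.0])) for _ in range(100)]
```

Training such a network with smoothness or sparsity penalties on `W` is one way the "simple architectural and training constraints" mentioned above can be imposed.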
Evaluated a popular claim that one unified mechanism explains grid-like spatial codes across both artificial and biological neural networks. Using large-scale electrophysiological recordings from navigating animals, I found important caveats when confronting theory with biological data.
Key assumptions, such as strong translational invariance and specific tuning shapes, do not reliably hold in biological systems. The effective dimensionality and geometric structure of neural representations are more complex than simple theoretical models suggest.
For AI, this highlights the importance of being cautious with one-size-fits-all inductive biases. Context, training regimes, and task demands matter significantly. Multiple mechanisms may be involved, and biologically testable constraints should guide modeling choices rather than elegant but oversimplified theories.
Using dimensionality reduction, topological data analysis, and representation-learning tools, I showed that place cell activity is constrained to far lower-dimensional manifolds than if cells formed arbitrary receptive fields across environments.
The effective state space is small, stable, and has a geometry that makes decoding position and planning trajectories straightforward. This compression isn't just efficient; it's functionally essential for reliable localization and memory.
Implications for AI: learning state spaces that are low-dimensional, stable, and task-aligned can dramatically boost sample efficiency, generalization, and downstream control. Rather than learning in high-dimensional observation spaces, systems should discover compact representations that preserve task-relevant structure.
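One standard way to quantify such effective dimensionality is the participation ratio of the covariance eigenspectrum. The sketch below uses synthetic data as a stand-in for real place-cell recordings; it is not the actual analysis pipeline.

```python
import numpy as np

def participation_ratio(X):
    """Effective dimensionality of an activity matrix X (samples x neurons):
    PR = (sum of eigenvalues)^2 / (sum of squared eigenvalues) of the covariance."""
    X = X - X.mean(axis=0)
    eig = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    eig = np.clip(eig, 0.0, None)  # discard tiny negative numerical noise
    return eig.sum() ** 2 / (eig ** 2).sum()

# Synthetic check: activity confined to a 3-D latent subspace of 100 "neurons"
rng = np.random.default_rng(0)
latent = rng.normal(size=(5000, 3))
X = latent @ rng.normal(size=(3, 100))
pr = participation_ratio(X)
```

For this synthetic example `pr` comes out near 3, far below the 100-dimensional ambient space, which is the signature of a low-dimensional manifold.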
Developed a detailed simulation of the Asynchronous Delta Modulator (ADM) neuromorphic front-end, which converts continuous sensory signals into sparse, event-based spike representations.
Used this simulation to study feature extraction strategies for spiking neural networks (SNNs), exploring how temporal coding and event-driven processing can enable more efficient computation compared to traditional frame-based approaches.
This work contributes to understanding how neuromorphic hardware can bridge biological inspiration with practical AI systems, particularly for edge computing and energy-efficient inference.
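The encoding principle behind an ADM can be conveyed in a few lines: emit an UP or DOWN event whenever the signal moves by more than a threshold from the level at the last event. This idealized sketch omits the refractory period and analog non-idealities a hardware front-end exhibits, and the threshold `delta` is an illustrative parameter.

```python
import numpy as np

def adm_encode(signal, delta=0.1):
    """Idealized asynchronous delta modulation: emit (sample_index, +1/-1) events
    whenever the signal deviates by >= delta from the reference level."""
    events = []
    ref = signal[0]
    for i, s in enumerate(signal[1:], start=1):
        while s - ref >= delta:   # UP events
            ref += delta
            events.append((i, +1))
        while ref - s >= delta:   # DOWN events
            ref -= delta
            events.append((i, -1))
    return events

t = np.linspace(0.0, 1.0, 1000)
signal = np.sin(2 * np.pi * t)
events = adm_encode(signal, delta=0.1)
```

The sparse event stream suffices to reconstruct the signal to within one `delta`, which is what makes downstream event-driven SNN processing attractive.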
Developed a deep reinforcement learning agent that learns patient-specific ablation patterns for treating atrial fibrillation. Current surgical approaches follow predefined lesion patterns without accounting for individual atrial geometry, leading to suboptimal outcomes and high recurrence rates.
The system uses a CNN-guided DQN to explore the anatomy-function landscape, combining fibrosis structure from MRI with electrophysiological simulations to propose sparse, effective lesion patterns. The agent learns to reliably terminate rotors and markedly lower simulated recurrence rates.
Crucially, the learned policies generalize across patients, demonstrating how learning from physiological structure can outperform rule-based clinical planning. This work illustrates the potential of bio-inspired RL in personalized medicine.
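At its core, the DQN learning signal is the standard Bellman target. The toy computation below shows only that target; the CNN, the atrial anatomy state, and the electrophysiological simulation are all omitted, and the numbers are made up for illustration.

```python
import numpy as np

def dqn_target(reward, q_next, done, gamma=0.99):
    """Bellman target for training the Q-network: r + gamma * max_a Q_target(s', a),
    with the bootstrap term zeroed on terminal transitions."""
    return reward + gamma * (1.0 - done) * q_next.max(axis=-1)

# One batch element: target-network Q-values for three candidate lesion actions
q_next = np.array([[0.2, 0.5, 0.1]])
target = dqn_target(np.array([1.0]), q_next, done=np.array([0.0]))
# target = 1.0 + 0.99 * 0.5 = 1.495
```

The Q-network is then regressed toward this target, which is how sparse lesion choices get credited for long-horizon rotor termination.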
As CTO of Dextrial Solutions, I'm responsible for building the technical infrastructure and platform architecture that streamlines the clinical trial recruitment process. Clinical trials often struggle with patient enrollment, leading to delays and increased costs.
Our platform uses AI to match patients with appropriate trials based on eligibility criteria, medical history, and geographic location. I'm currently exploring ShikanaEvolve to improve distributed AI orchestration, enabling more efficient coordination across multiple data sources and stakeholders.
Beyond the technical implementation, I manage DevOps, ensuring scalable, secure deployment of our services while maintaining compliance with healthcare data regulations.
Vesicle is a lightweight desktop application built with Rust and Tauri that enables seamless local AI inference. Named after synaptic vesicles, the biological structures that transmit signals between neurons, it embodies the idea of rapid, efficient information transfer.
The application can be invoked with a single keyboard shortcut and automatically ingests any highlighted text into the prompt, making it effortless to query AI models without context switching. All inference runs locally, ensuring privacy and low latency.
This project reflects my interest in making AI tools more accessible and integrated into daily workflows, while maintaining the efficiency and control that local execution provides.
Collaborated on creating an interactive AI exhibit at the HF0 incubator house, designed to make complex concepts around artificial intelligence and emergence accessible to a broader audience.
The exhibit explored how intelligence emerges from simple rules and interactions, drawing parallels between biological neural networks and artificial systems. Visitors could interact with live demonstrations and visualizations that illustrated key principles of learning, adaptation, and self-organization.
This project combined my technical expertise with a passion for science communication, bridging the gap between cutting-edge research and public understanding.
My team won AutoGPT's Beat, Build, Break hackathon by developing a comprehensive evaluation platform for agentic AI systems. As AI agents become more autonomous and complex, robust evaluation frameworks are essential to ensure reliability, safety, and alignment with intended goals.
Our platform provides systematic testing across multiple dimensions: task completion, reasoning quality, error recovery, and behavioral consistency. It enables developers to identify failure modes and iteratively improve agent architectures.
This project reflects my belief that as we build more capable AI systems, we must simultaneously develop rigorous methods to understand and validate their behavior.
As technical lead, I guided my team to a top-3 finish at the MIT × Google hackathon focused on sustainable e-commerce. The challenge was to develop AI-driven solutions that reduce environmental impact while maintaining business viability.
We built a system that optimizes supply chain logistics, predicts demand to minimize overproduction, and recommends sustainable alternatives to consumers. The solution balanced multiple objectives: reducing carbon footprint, minimizing waste, and maintaining user experience.
This project demonstrated how AI can be a powerful tool for addressing real-world sustainability challenges, aligning economic incentives with environmental responsibility.
I served as President of the Biomedical Engineering Association at ETH Zurich for two years, where I established a seminar series bringing together leading voices from industry and academia. This initiative created a bridge between cutting-edge research and practical applications, fostering dialogue and collaboration across disciplines.
Beyond academia, I'm passionate about travel and have explored 49 countries, each journey deepening my appreciation for diverse perspectives and cultures. Photography serves as my lens for capturing these experiences, from minimalist architectural compositions to sweeping natural landscapes.
I'm particularly drawn to Japanese culture and have spent approximately eight months in Japan. I'm currently studying Japanese at an intermediate level, taking the highest-level course offered by Imperial College London. My aspiration is to work in Japan, combining my research interests with immersion in a culture that values both innovation and tradition.
Developing a theory for reward-agnostic control that leverages the inherent structure of observation and action spaces. For discrete spaces, observations and actions form graphs where edges represent transitions. For continuous spaces, they lie on manifolds with actions represented by metrics on tangent spaces.
The approach involves learning latent representations that preserve transition structures, training FiLM-conditioned dynamics models to predict state evolution, and using conditional diffusion processes to map starting states to target states guided by learned dynamics rather than rewards. This enables planning by sampling and ranking trajectories based on terminal proximity and dynamics likelihood.
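The FiLM conditioning itself is simple: a conditioning vector produces per-feature scale and shift parameters that modulate the dynamics model's activations. The linear maps and dimensions below are placeholders for illustration, not the actual architecture.

```python
import numpy as np

def film(features, cond, W_gamma, b_gamma, W_beta, b_beta):
    """FiLM conditioning: gamma(cond) * features + beta(cond), where gamma and
    beta are produced here by a single linear layer (the real model may differ)."""
    gamma = cond @ W_gamma + b_gamma
    beta = cond @ W_beta + b_beta
    return gamma * features + beta

rng = np.random.default_rng(0)
d_feat, d_cond = 8, 4
features = rng.normal(size=(2, d_feat))   # batch of dynamics-model activations
cond = rng.normal(size=(2, d_cond))       # batch of conditioning vectors
out = film(features, cond,
           rng.normal(size=(d_cond, d_feat)), np.ones(d_feat),
           rng.normal(size=(d_cond, d_feat)), np.zeros(d_feat))
```

With zero weight matrices, unit `b_gamma`, and zero `b_beta`, FiLM reduces to the identity, which makes it a gentle way to inject conditioning into a pretrained dynamics model.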
The brain exhibits functional segregation; specific neural circuits handle specific functions. This contrasts with the current AI paradigm of scaling monolithic models. I believe small, specialized models working in distributed systems offer a more sustainable and efficient path forward.
Recent advances like GraphMERT for knowledge graph extraction, Hierarchical Reasoning Models, RetNet for low-latency inference, Mamba-style state-space models for long-context processing, and Liquid Neural Networks for adaptive temporal reasoning demonstrate the viability of compact specialist architectures. Neurosymbolic methods and parameter-efficient fine-tuning enable these models to collaborate, scaling through composition rather than parameter count.
I'm convinced that analog, neuromorphic, and optical computing will be vital for energy-efficient AI. Current digital architectures are reaching fundamental limits in power efficiency. Neuromorphic systems that process information through sparse, event-driven spikes can achieve orders of magnitude better energy efficiency. Optical computing promises massive parallelism and speed for specific operations. Analog computing, long dismissed, is experiencing a renaissance for certain classes of problems. The future of AI will likely be heterogeneous, matching computational substrates to task requirements.
